Cephadm Full-Feature Installation of Ceph Pacific



  • Contents
  • 2.1 Introduction to cephadm
  • 2.2 Related Concepts
  • 2.3 Placement Specifications
  • 3.1 Deployment Requirements
  • 3.2 Lab Environment
  • 5.1 Download the cephadm Script
  • 5.2 Bootstrap a Single-Node Ceph Cluster
  • 5.3 Add Hosts
  • 5.4 Add OSDs
  • 5.5 View Deployed Ceph Services
  • 5.6 Deploy RGW
  • 5.7 Deploy CephFS
  • 5.8 Deploy NFS
  • 5.9 Deploy iSCSI
  • 5.10 Add rbd-mirror
  • 5.11 Deploy cephfs-mirror
  • 5.12 Add ingress

1. Preface

In an article written a year ago I mentioned the cephadm installation tool. It had only just been released and quite a few features could not yet be installed. After a year of waiting, about a month ago I found that the basic features in Ceph 16 were largely complete, so I started studying cephadm. Since I wanted to write a fairly comprehensive article, not just the basics covered by the articles already online, I ran into problems installing iSCSI and ingress that I could not solve even after dozens of installation attempts. A while ago Ceph released 16.2.5, and with it everything finally works, which is why this article exists.

PS: a personal observation: unless you really need to, avoid chasing very new open-source software. It is exhausting, and new things always have some bugs.

2. Introduction to cephadm

2.1 Introduction to cephadm

Cephadm is the installation tool released with Ceph v15.2.0 (Octopus), and it does not support older Ceph releases. Cephadm does not depend on external configuration tools such as Ansible, Rook, or Salt; it achieves this by connecting the manager daemon to hosts over SSH. The manager daemon can then add, remove, and update Ceph containers.

The latest releases of Red Hat Ceph Storage 5 and SUSE Enterprise Storage 7 also use Cephadm, so Cephadm is the future of Ceph installation tooling. It is now more or less mature: the NFS and iSCSI services are officially declared stable, CIFS and other services are planned, and the remaining features keep improving. It is time to learn Cephadm systematically.

The figure below shows the deployment architecture of recent Ceph releases: a single orchestrator interface connects downward to two orchestration backends, Rook and cephadm, and is driven from above through the "ceph orch" command line or the Ceph Dashboard.

This orchestration is essentially the same idea as container orchestration: you can think of Cephadm as a miniature Kubernetes. Rook is a bit different; it is best understood as a middle layer that plugs into Kubernetes, with the actual orchestration done by Kubernetes, whereas Cephadm implements its own scheduling and orchestration logic.

(Figure: Ceph orchestration architecture - the "ceph orch" CLI and the Dashboard drive one orchestrator API, backed by either cephadm or Rook)

The two orchestration backends differ somewhat in the features they support, as shown in the table below:

(Table: feature support comparison between the cephadm and Rook orchestrator backends)


  • ⚪ = not yet implemented
  • ❌ = not applicable
  • ✔ = implemented

Cephadm manages the entire lifecycle of a Ceph cluster. The lifecycle starts with the bootstrap process, when cephadm creates a single-node Ceph cluster on one host. That cluster consists of one MON and one MGR.

After the single-node cluster is created, Cephadm uses the orchestration interface (the "day 2" commands) to expand the cluster, adding hosts and deploying the required Ceph daemons and services. These operations can be performed through the Ceph command-line interface (CLI) or through the Dashboard (GUI).
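As a rough preview of that lifecycle (each of these commands is run for real in section 5 below), it reduces to one bootstrap command followed by "day 2" orchestration commands:

#cephadm bootstrap --mon-ip 192.168.149.128     # day 1: create a single-node cluster
#ceph orch host add ceph2 192.168.149.145       # day 2: grow the cluster
#ceph orch apply osd --all-available-devices    # day 2: declare the services you want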

cephadm is still under active development. Some features are not yet well documented (RGW, for example), and for others the design has not been finalized and may still change significantly, such as ingress (formerly rgw-ha) and cephfs-mirror.

2.2 Related Concepts

The previous section used two terms, service and daemon. These two concepts are central to cephadm and run through the entire lifecycle management of the cluster. ceph-deploy had no such notion: to deploy three MONs you simply deployed three MON processes on three hosts. With cephadm, because orchestration is involved, deploying a highly available MON setup means deploying a mon service, which is still ultimately backed by three containers running MONs. That raises another question: do those three MON containers run on one host or on three hosts? This is decided by the placement specification (covered in the next section). The relationship between a service and its daemons is shown in the figure below:

(Figure: one service backed by multiple daemons scheduled across hosts)
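To make the service/daemon distinction concrete, here is a minimal sketch using the mds service that is deployed later in this article: one apply call declares the service, and cephadm schedules the daemons that back it (the filter flags assume the Pacific "ceph orch" CLI).

#ceph orch apply mds cephfs --placement=3    # declare one mds service with 3 daemons
#ceph orch ls mds                            # service view: mds.cephfs, RUNNING 3/3
#ceph orch ps --daemon-type mds              # daemon view: one mds daemon per host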

2.3 Placement Specifications

As noted above, the placement specification defines, for a given service, how many daemons are deployed and on which hosts; the orchestrator then schedules daemons accordingly. cephadm supports five kinds of placement specifications:

  1. Explicit host match: name the hosts directly, and the daemons are deployed onto exactly those hosts.
  2. Label match: attach a label to hosts when adding them, and the daemons are deployed onto every host carrying that label.
  3. Pattern match: for daemons that must run on every host (such as node-exporter in this deployment), a pattern like * matches all hosts, so a daemon is placed on each of them.
  4. Count match: specify only how many daemons to run and let the cephadm scheduler decide which hosts to use.
  5. A special case: disable automatic deployment (unmanaged) and place the daemons manually instead.
  6. All five forms can be expressed either on the command line or in a YAML spec file, as shown in the sketch after this list.
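A hedged sketch of what the five forms look like in practice (the host names are from this lab, and the label name "mon" is just an example):

#ceph orch apply mon --placement="ceph1 ceph2 ceph3"    # 1. explicit hosts
#ceph orch apply mon --placement="label:mon"            # 2. label match
#ceph orch apply node-exporter --placement="*"          # 3. pattern match (all hosts)
#ceph orch apply mgr --placement=2                      # 4. count match
#ceph orch apply mon --unmanaged                        # 5. disable automatic deployment

In a YAML spec file the same choices are expressed under the placement key with hosts, label, host_pattern, count, or unmanaged: true.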

Let's first look at how cephadm displays services. In the output below, a PLACEMENT of * means the service is placed on every host; count:1 means count-based placement; ceph1;ceph2;ceph3 means explicit host placement. RUNNING is the ratio of running daemons to desired daemons: mon shows 3/5 because the mon service defaults to 5 daemons but there are only 3 hosts, so only 3 are running (the desired count can of course be changed). PORTS shows the IP and ports the service exposes.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE  PLACEMENT          
alertmanager               ?:9093,9094                    1/1  4m ago     3d   count:1            
crash                                                     3/3  4m ago     3d   *                  
grafana                    ?:3000                         1/1  4m ago     3d   count:1            
ingress.nfs.nfs            192.168.149.201:2050,1968      6/6  4m ago     6h   count:3            
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  4m ago     6h   count:3            
iscsi.gw                                                  3/3  4m ago     7h   ceph1;ceph2;ceph3  
mds.cephfs                                                3/3  4m ago     7h   count:3            
mgr                                                       2/2  4m ago     3d   count:2            
mon                                                       3/5  4m ago     3d   count:5            
nfs.nfs                                                   3/3  4m ago     7h   count:3            
node-exporter              ?:9100                         3/3  4m ago     3d   *                  
osd.all-available-devices                                9/12  4m ago     3d   *                  
prometheus                 ?:9095                         1/1  4m ago     3d   count:1            
rbd-mirror                                                3/3  4m ago     7h   count:3            
rgw.rgw                    ?:80                           3/3  4m ago     7h   count:3   

Note

To change the default number of mons, run ceph orch apply mon 3. To pin mons to specific hosts, or to disable automatic deployment entirely, use ceph orch apply mon --unmanaged; see the sketch below.
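A short sketch of both variants (host names and IPs are the ones used in this lab):

#ceph orch apply mon --placement="ceph1 ceph2 ceph3"    # pin the mons to specific hosts
#ceph orch apply mon --unmanaged                        # or stop cephadm from scheduling mons...
#ceph orch daemon add mon ceph2:192.168.149.145         # ...and place each mon daemon by hand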

3. Deployment Requirements

3.1 Deployment Requirements

Cephadm requires the following software on each system:

  • Python 3
  • Systemd
  • Podman or Docker (if you use Podman, note that specific Podman versions are only compatible with specific Ceph versions; check the compatibility table in the official cephadm documentation)
  • chrony or NTP
  • LVM2

3.2 Lab Environment

No.  Component  Version
1    OS         CentOS 8.3 (minimal)
2    Podman     3.0.2-dev
3    Ceph       Pacific (16.2.5)
4    Python     3.6.8

No.  Hostname  Disks (20 GB each)  Roles
1    ceph1     sdb, sdc, sdd       cephadm, mon, mgr, osd, rgw, nfs, cephfs, iscsi, prometheus, grafana, rbd-mirror, cephfs-mirror, nfs-ingress, rgw-ingress
2    ceph2     sdb, sdc, sdd       mon, mgr, osd, rgw, nfs, cephfs, iscsi, prometheus, grafana, rbd-mirror, cephfs-mirror, nfs-ingress, rgw-ingress
3    ceph3     sdb, sdc, sdd       mon, mgr, osd, rgw, nfs, cephfs, iscsi, prometheus, grafana, rbd-mirror, cephfs-mirror, nfs-ingress, rgw-ingress

4. Deployment Preparation

LVM2 already ships with the OS, so it does not need to be installed separately.

#dnf install epel-release -y
#dnf install python3 -y
#dnf install podman -y
#dnf install -y chrony
#systemctl start chronyd && systemctl enable chronyd

Note

The chrony time service is mandatory, for two reasons: without it, adding hosts will fail with an error, and even if the installation succeeds, ceph -s will warn that the clocks are not synchronized.

Disable the firewall and SELinux.

#systemctl disable firewalld && systemctl stop firewalld
#setenforce 0
#sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

(Optional) If you already set short hostnames when installing the OS, skip this step.

#hostnamectl set-hostname ceph1
#hostnamectl set-hostname ceph2
#hostnamectl set-hostname ceph3

Note

cephadm requires short hostnames, not FQDNs; otherwise adding the host will fail.

Add the hostname-to-IP mappings to /etc/hosts; the hostnames must match the ones set above.

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.149.128 ceph1
192.168.149.145 ceph2
192.168.149.146 ceph3

Configure time synchronization

Configure the ceph1 node as the NTP server.

#vi /etc/chrony.conf
allow 192.168.149.0/24
#systemctl restart chronyd

Configure the other nodes, ceph2 and ceph3, as NTP clients.

#vi /etc/chrony.conf
server ceph1 iburst
#systemctl restart chronyd

On ceph2 and ceph3, verify that the configuration works.

#chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ceph-admin                      3   9     0   68m    +39us[  +45us] +/-   18ms
^- time.cloudflare.com           3  10   352   49m  +5653us[+5653us] +/-   71ms
^? de-user.deepinid.deepin.>     3  10    21   21m  -2286us[-2286us] +/-   97ms
^? 2402:f000:1:416:101:6:6:>     0   6     0     -     +0ns[   +0ns] +/-    0ns

5. Deploying Ceph

5.1 Download the cephadm Script

Download the cephadm script, add the repository for the matching release, and install cephadm and ceph-common.

#curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
#chmod +x cephadm
#./cephadm add-repo --release pacific
#./cephadm install
#./cephadm install  ceph-common

Note

The official documentation also mentions installing cephadm with dnf install -y cephadm. In practice it is better to avoid that: the packaged cephadm may not be the latest version while the container images it pulls are, so the two versions can end up mismatched.
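A quick, hedged way to sanity-check that the tool and the containers line up:

#cephadm version    # version of the locally installed cephadm tool
#ceph -v            # version of the ceph-common CLI it installed
#ceph orch ps       # the VERSION column shows what each container is actually running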

Check that the orchestrator backend is cephadm; if Rook were being used, the backend shown here would be rook.

[root@ceph1 ~]#  ceph orch status
Backend: cephadm
Available: Yes
Paused: No

5.2 Bootstrap a Single-Node Ceph Cluster

Now use cephadm to "bootstrap" a single-node Ceph cluster, as illustrated below:

(Figure: bootstrapping a single-node cluster on ceph1)

[root@ceph1 ~]# cephadm bootstrap --mon-ip 192.168.149.128
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 36e7a21c-e3f7-11eb-8960-000c299df6ef
Verifying IP 192.168.149.128 port 3300 ...
Verifying IP 192.168.149.128 port 6789 ...
Mon IP `192.168.149.128` is in CIDR network `192.168.149.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image docker.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.149.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 13...
mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

      URL: https://ceph1:8443/
     User: admin
 Password: dqhiov5x4v

Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:

 sudo /usr/sbin/cephadm shell --fsid 36e7a21c-e3f7-11eb-8960-000c299df6ef -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

 ceph telemetry on

For more information see:

 https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.

Bootstrapping the single-node cluster does the following:

  • Creates the monitor and manager daemons for the new cluster on the local host.
  • Generates a new SSH key for the Ceph cluster and adds it to the root user's /root/.ssh/authorized_keys file.
  • Writes the minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
  • Writes a copy of the client.admin administrative secret key to /etc/ceph/ceph.client.admin.keyring.
  • Writes a copy of the public key to /etc/ceph/ceph.pub.

When the bootstrap finishes, note down the URL, user, and password shown above, open the Ceph Dashboard, and change the password when prompted; you will then be asked to activate the telemetry module.

(Optional) If you forgot to record the password, you can reset it as follows: write the new password into a file named password and import it with the command below.

ceph dashboard ac-user-set-password admin -i password 
{"username": "admin", "password": "$2b$12$6oFrEpssXCzLnKTWQy5fM.YZwlHjn8CuQRdeSSJR9hBGgVuwGCxoa", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1620495653, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}

Complete the activation by following the wizard.


If you skipped enabling telemetry in the Dashboard, you can also enable it from the command line with "ceph telemetry on --license sharing-1-0".

5.3 Add Hosts

As mentioned above, after the single-node cluster is bootstrapped, the bootstrap program writes a copy of the public key to /etc/ceph/ceph.pub. Before adding new hosts, distribute that key to every host that will join the cluster, as illustrated below:

(Figure: copying /etc/ceph/ceph.pub to ceph2 and ceph3 before adding them to the cluster)

[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph2 (192.168.149.145)' can't be established.
ECDSA key fingerprint is SHA256:1fioQmugbtBCiRuwNNKr/aa3Z/hm5zeqUrIfOZi2nS8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
root@ceph2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph2'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph3 (192.168.149.146)' can't be established.
ECDSA key fingerprint is SHA256:eBmb4q2ptVYS55njTzmQYCNo4p3yguNi85nHyAuR4XU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
root@ceph3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph3'"
and check to make sure that only the key(s) you wanted were added.



[root@ceph1 ~]# ceph orch host add ceph2 192.168.149.145
Added host 'ceph2' with addr '192.168.149.145'
[root@ceph1 ~]# ceph orch host add ceph3 192.168.149.146
Added host 'ceph3' with addr '192.168.149.146'


Note

Sometimes a host cannot be added without specifying its IP address. Some articles online omit the IP, but in some situations omitting it causes an error, so just include it; the official documentation includes the IP as well.

[root@ceph1 ~]# ceph orch host ls
HOST   ADDR             LABELS  STATUS  
ceph1  192.168.149.128  _admin          
ceph2  192.168.149.145                  
ceph3  192.168.149.146         
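If you plan to use label-based placement (section 2.3), labels can be attached while adding a host or afterwards. A hedged sketch, with "mon" as an example label:

#ceph orch host add ceph2 192.168.149.145 --labels mon    # add a host with a label
#ceph orch host label add ceph3 mon                       # or label an existing host
#ceph orch apply mon --placement="label:mon"              # then place daemons by that label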

5.4 Add OSDs

A device can be added as an OSD only if it meets all of the following conditions:

  • The device must have no partitions.
  • The device must not have any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB.

There are two ways to add OSDs. The first is to automatically consume every device that meets the conditions:

#ceph orch apply osd --all-available-devices

The second is to add OSDs by specifying devices manually:

 #ceph orch daemon add osd ceph1:/dev/sdb
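For reference, the automatic variant can also be written as an OSD service spec in YAML (the same spec style used for iSCSI and ingress later on) and applied with ceph orch apply -i osd.yaml; a sketch, assuming the Pacific spec format:

service_type: osd
service_id: all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true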

This article uses the first, automatic method. After the deployment completes, check the device list; once Available shows No, the devices have been consumed as OSDs.

[root@ceph1 ~]# ceph orch device ls
Hostname  Path      Type  Serial  Size   Health   Ident  Fault  Available  
ceph1     /dev/sdb  hdd           21.4G  Unknown  N/A    N/A    No         
ceph1     /dev/sdc  hdd           21.4G  Unknown  N/A    N/A    No         
ceph1     /dev/sdd  hdd           21.4G  Unknown  N/A    N/A    No         
ceph2     /dev/sdb  hdd           21.4G  Unknown  N/A    N/A    No         
ceph2     /dev/sdc  hdd           21.4G  Unknown  N/A    N/A    No         
ceph2     /dev/sdd  hdd           21.4G  Unknown  N/A    N/A    No         
ceph3     /dev/sdb  hdd           21.4G  Unknown  N/A    N/A    No         
ceph3     /dev/sdc  hdd           21.4G  Unknown  N/A    N/A    No         
ceph3     /dev/sdd  hdd           21.4G  Unknown  N/A    N/A    No      

5.5 View Deployed Ceph Services

From the command line, the Ceph status is healthy.

[root@ceph1 ~]# ceph -s
  cluster:
    id:     36e7a21c-e3f7-11eb-8960-000c299df6ef
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 8s)
    mgr: ceph1.nwbihh(active, since 3d), standbys: ceph2.ednijf
    osd: 9 osds: 9 up (since 3s), 9 in (since 3d)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   48 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean
 

Open the Dashboard: the monitoring (Grafana) panels fail to display.

The reason is that Grafana is served over HTTPS with a self-signed certificate, so the browser refuses to render it.

Simply open the Grafana address directly in the browser, click "Accept the Risk and Continue", then go back to the Ceph Dashboard and the panels render correctly.

The monitoring panels now display correctly.

The alerting module is integrated automatically as well, unlike the old ceph-deploy installs where this required a complicated manual integration.

iSCSI is not integrated automatically yet.

The rbd-mirror service is not enabled.

NFS is not integrated either.

CephFS is not deployed.

RGW is not integrated either.

Below, we deploy and integrate these services one by one.

5.6 Deploy RGW

Deploy RGW using count-based placement.

 #ceph orch apply rgw rgw --placement=3

Check the service status with the service-level command ceph orch ls.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager               ?:9093,9094      1/1  10m ago    3d   count:1    
crash                                       3/3  10m ago    3d   *          
grafana                    ?:3000           1/1  10m ago    3d   count:1    
mgr                                         2/2  10m ago    3d   count:2    
mon                                         3/5  10m ago    3d   count:5    
node-exporter              ?:9100           3/3  10m ago    3d   *          
osd.all-available-devices                  9/12  10m ago    3d   *          
prometheus                 ?:9095           1/1  10m ago    3d   count:1    
rgw.rgw                    ?:80             3/3  1s ago     12s  count:3    

Check the daemon status with the daemon-level command ceph orch ps.

[root@ceph1 ~]# ceph orch ps
NAME                  HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      ConTAINER ID  
alertmanager.ceph1    ceph1  *:9093,9094  running (3d)     30s ago   3d    25.4M        -  0.20.0   0881eb8f169f  32812d14049d  
crash.ceph1           ceph1               running (3d)     30s ago   3d    2675k        -  16.2.5   6933c2a0b7dd  4e28e82a2c92  
crash.ceph2           ceph2               running (3d)     34s ago   3d    12.3M        -  16.2.5   6933c2a0b7dd  45e02925199a  
crash.ceph3           ceph3               running (3d)     34s ago   3d    12.9M        -  16.2.5   6933c2a0b7dd  e98dc6157ba2  
grafana.ceph1         ceph1  *:3000       running (3d)     30s ago   3d    54.0M        -  6.7.4    ae5c36c3d3cd  df3a2a73271c  
mgr.ceph1.nwbihh      ceph1  *:9283       running (3d)     30s ago   3d     428M        -  16.2.5   6933c2a0b7dd  8210247b8cef  
mgr.ceph2.ednijf      ceph2  *:8443,9283  running (3d)     34s ago   3d     399M        -  16.2.5   6933c2a0b7dd  cf556f5d2527  
mon.ceph1             ceph1               running (3d)     30s ago   3d     475M    2048M  16.2.5   6933c2a0b7dd  eef3a01cca72  
mon.ceph2             ceph2               running (3d)     34s ago   3d     293M    2048M  16.2.5   6933c2a0b7dd  4130307d82f2  
mon.ceph3             ceph3               running (3d)     34s ago   3d     261M    2048M  16.2.5   6933c2a0b7dd  fa0cdd7af8a9  
node-exporter.ceph1   ceph1  *:9100       running (3d)     30s ago   3d    14.9M        -  0.18.1   e5a616e4b9cf  3fc396d01969  
node-exporter.ceph2   ceph2  *:9100       running (3d)     34s ago   3d    11.3M        -  0.18.1   e5a616e4b9cf  2b0081864a94  
node-exporter.ceph3   ceph3  *:9100       running (3d)     34s ago   3d    11.2M        -  0.18.1   e5a616e4b9cf  73a7bbe0831c  
osd.0                 ceph2               running (3d)     34s ago   3d    41.2M    4096M  16.2.5   6933c2a0b7dd  2046f2cc358e  
osd.1                 ceph3               running (3d)     34s ago   3d    45.4M    4096M  16.2.5   6933c2a0b7dd  09058507fe6e  
osd.2                 ceph1               running (3d)     30s ago   3d    33.0M    4096M  16.2.5   6933c2a0b7dd  80d58366f0dc  
osd.3                 ceph2               running (3d)     34s ago   3d    42.5M    4096M  16.2.5   6933c2a0b7dd  63654f9d8082  
osd.4                 ceph3               running (3d)     34s ago   3d    43.3M    4096M  16.2.5   6933c2a0b7dd  d3a82429878d  
osd.5                 ceph1               running (3d)     30s ago   3d    31.3M    4096M  16.2.5   6933c2a0b7dd  ebfc2a71bc3c  
osd.6                 ceph2               running (3d)     34s ago   3d    38.8M    4096M  16.2.5   6933c2a0b7dd  235189a4bd54  
osd.7                 ceph3               running (3d)     34s ago   3d    44.3M    4096M  16.2.5   6933c2a0b7dd  b7f8a457a3b1  
osd.8                 ceph1               running (3d)     30s ago   3d    33.2M    4096M  16.2.5   6933c2a0b7dd  eb1bf3e567fd  
prometheus.ceph1      ceph1  *:9095       running (3d)     30s ago   3d    81.5M        -  2.18.1   de242295e225  4da5a8d98259  
rgw.rgw.ceph1.lgcvfw  ceph1  *:80         running (43s)    30s ago  42s    48.0M        -  16.2.5   6933c2a0b7dd  20fb488d35ad  
rgw.rgw.ceph2.eqykjt  ceph2  *:80         running (40s)    34s ago  40s    40.1M        -  16.2.5   6933c2a0b7dd  e9c9538b064e  
rgw.rgw.ceph3.eybvwe  ceph3  *:80         running (37s)    34s ago  37s    41.8M        -  16.2.5   6933c2a0b7dd  75e93ef56bb7  

Integrate RGW into the Dashboard.

#radosgw-admin user create --uid=rgw --display-name=rgw --system
"keys": [
        {
            "user": "rgw",
            "access_key": "M0XRR80H4AGGE4PP0A5B",
            "secret_key": "Tbln48sfIceDGNill5muCrX0oMCHrQcl2oC9OURe"
        }
    ],
            ......

Record the access_key and secret_key values, save them into access_key.txt and secret_key.txt, and import them into the Dashboard with the commands that follow.
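For example, write the two files using the key values printed above:

#echo -n M0XRR80H4AGGE4PP0A5B > access_key.txt
#echo -n Tbln48sfIceDGNill5muCrX0oMCHrQcl2oC9OURe > secret_key.txt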

#ceph dashboard set-rgw-api-access-key -i access_key.txt 
Option RGW_API_ACCESS_KEY updated
#ceph dashboard set-rgw-api-secret-key -i secret_key.txt 
Option RGW_API_SECRET_KEY updated

The Ceph Dashboard now shows RGW integrated successfully. (According to the official documentation, a future cephadm release will integrate RGW into the Dashboard automatically, so this manual step will no longer be needed.)


5.7 Deploy CephFS

Deploy the CephFS service and create a file system. There are two ways to do this: the ceph fs volume create command, which creates the required pools automatically, or creating the pools by hand and then creating the MDS service. Pick either one of the methods below.

#ceph fs volume create cephfs --placement=3
#ceph osd pool create cephfs_data 32
#ceph osd pool create cephfs_metadata 32
#ceph fs new cephfs cephfs_metadata cephfs_data
#ceph orch apply mds cephfs --placement=3

Check the service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager               ?:9093,9094      1/1  1s ago     3d   count:1    
crash                                       3/3  5s ago     3d   *          
grafana                    ?:3000           1/1  1s ago     3d   count:1    
mds.cephfs                                  3/3  5s ago     15s  count:3    
mgr                                         2/2  5s ago     3d   count:2    
mon                                         3/5  5s ago     3d   count:5    
node-exporter              ?:9100           3/3  5s ago     3d   *          
osd.all-available-devices                  9/12  5s ago     3d   *          
prometheus                 ?:9095           1/1  1s ago     3d   count:1    
rgw.rgw                    ?:80             3/3  5s ago     21m  count:3  

Check the daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                     HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      ConTAINER ID  
alertmanager.ceph1       ceph1  *:9093,9094  running (7m)     32s ago   3d    31.9M        -  0.20.0   0881eb8f169f  66918f633189  
crash.ceph1              ceph1               running (7m)     32s ago   3d    6287k        -  16.2.5   6933c2a0b7dd  dbe00b18ef37  
crash.ceph2              ceph2               running (7m)     36s ago   3d    7889k        -  16.2.5   6933c2a0b7dd  c24dcd762121  
crash.ceph3              ceph3               running (7m)     36s ago   3d    10.1M        -  16.2.5   6933c2a0b7dd  eed1c74352f0  
grafana.ceph1            ceph1  *:3000       running (7m)     32s ago   3d    70.5M        -  6.7.4    ae5c36c3d3cd  cbbfae0f90e4  
mds.cephfs.ceph1.ooraiz  ceph1               running (44s)    32s ago  43s    24.0M        -  16.2.5   6933c2a0b7dd  f668ab1ad9bb  
mds.cephfs.ceph2.qfmprj  ceph2               running (41s)    36s ago  41s    19.5M        -  16.2.5   6933c2a0b7dd  d8bb9979aca2  
mds.cephfs.ceph3.ifskba  ceph3               running (39s)    36s ago  39s    19.8M        -  16.2.5   6933c2a0b7dd  d0fd9a78c1d8  
mgr.ceph1.nwbihh         ceph1  *:9283       running (7m)     32s ago   3d     480M        -  16.2.5   6933c2a0b7dd  90bd2d0704b3  
mgr.ceph2.ednijf         ceph2  *:8443,9283  running (7m)     36s ago   3d     468M        -  16.2.5   6933c2a0b7dd  9b69c446355f  
mon.ceph1                ceph1               running (7m)     32s ago   3d     139M    2048M  16.2.5   6933c2a0b7dd  736fd1630a06  
mon.ceph2                ceph2               running (7m)     36s ago   3d     108M    2048M  16.2.5   6933c2a0b7dd  629bc53c8992  
mon.ceph3                ceph3               running (7m)     36s ago   3d     125M    2048M  16.2.5   6933c2a0b7dd  1b56b8a0d9ac  
node-exporter.ceph1      ceph1  *:9100       running (7m)     32s ago   3d    23.3M        -  0.18.1   e5a616e4b9cf  3664470b1641  
node-exporter.ceph2      ceph2  *:9100       running (7m)     36s ago   3d    24.9M        -  0.18.1   e5a616e4b9cf  140f7e84ba38  
node-exporter.ceph3      ceph3  *:9100       running (7m)     36s ago   3d    24.8M        -  0.18.1   e5a616e4b9cf  de98304ceaea  
osd.0                    ceph2               running (7m)     36s ago   3d    59.2M    4096M  16.2.5   6933c2a0b7dd  6ebce33417c2  
osd.1                    ceph3               running (7m)     36s ago   3d    82.7M    4096M  16.2.5   6933c2a0b7dd  6d43f1b7bfde  
osd.2                    ceph1               running (7m)     32s ago   3d    56.0M    4096M  16.2.5   6933c2a0b7dd  07d1183306d1  
osd.3                    ceph2               running (7m)     36s ago   3d    67.5M    4096M  16.2.5   6933c2a0b7dd  a77d771d3e6d  
osd.4                    ceph3               running (7m)     36s ago   3d    48.7M    4096M  16.2.5   6933c2a0b7dd  3d82752a8fb1  
osd.5                    ceph1               running (7m)     32s ago   3d    61.6M    4096M  16.2.5   6933c2a0b7dd  6f7b5df090d5  
osd.6                    ceph2               running (7m)     36s ago   3d    59.2M    4096M  16.2.5   6933c2a0b7dd  7f769655adc7  
osd.7                    ceph3               running (7m)     36s ago   3d    51.9M    4096M  16.2.5   6933c2a0b7dd  24d63cd18dde  
osd.8                    ceph1               running (7m)     32s ago   3d    44.8M    4096M  16.2.5   6933c2a0b7dd  667fb8831e53  
prometheus.ceph1         ceph1  *:9095       running (7m)     32s ago   3d    97.0M        -  2.18.1   de242295e225  f67fbc035cba  
rgw.rgw.ceph1.lgcvfw     ceph1  *:80         running (7m)     32s ago  22m    71.1M        -  16.2.5   6933c2a0b7dd  c8e1c9701010  
rgw.rgw.ceph2.eqykjt     ceph2  *:80         running (7m)     36s ago  22m    82.9M        -  16.2.5   6933c2a0b7dd  d8ba326c22b8  
rgw.rgw.ceph3.eybvwe     ceph3  *:80         running (7m)     36s ago  22m    81.4M        -  16.2.5   6933c2a0b7dd  d89c2b87e8e4  

Check the status in the Ceph Dashboard.


5.8 Deploy NFS

First create the pool that NFS needs.

#ceph osd pool create ganesha_data 32
#ceph osd pool application enable ganesha_data nfs

Deploy the NFS service.

 #ceph orch apply nfs nfs ganesha_data --placement=3

Check the service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager               ?:9093,9094      1/1  3s ago     3d   count:1    
crash                                       3/3  8s ago     3d   *          
grafana                    ?:3000           1/1  3s ago     3d   count:1    
mds.cephfs                                  3/3  8s ago     3m   count:3    
mgr                                         2/2  8s ago     3d   count:2    
mon                                         3/5  8s ago     3d   count:5    
nfs.nfs                                     3/3  8s ago     28s  count:3    
node-exporter              ?:9100           3/3  8s ago     3d   *          
osd.all-available-devices                  9/12  8s ago     3d   *          
prometheus                 ?:9095           1/1  3s ago     3d   count:1    
rgw.rgw                    ?:80             3/3  8s ago     25m  count:3    

Check the daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                      HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      ConTAINER ID  
alertmanager.ceph1        ceph1  *:9093,9094  running (11m)    27s ago   3d    28.9M        -  0.20.0   0881eb8f169f  66918f633189  
crash.ceph1               ceph1               running (11m)    27s ago   3d    5628k        -  16.2.5   6933c2a0b7dd  dbe00b18ef37  
crash.ceph2               ceph2               running (10m)    33s ago   3d    7889k        -  16.2.5   6933c2a0b7dd  c24dcd762121  
crash.ceph3               ceph3               running (10m)    33s ago   3d    10.1M        -  16.2.5   6933c2a0b7dd  eed1c74352f0  
grafana.ceph1             ceph1  *:3000       running (11m)    27s ago   3d    70.6M        -  6.7.4    ae5c36c3d3cd  cbbfae0f90e4  
mds.cephfs.ceph1.ooraiz   ceph1               running (3m)     27s ago   3m    26.6M        -  16.2.5   6933c2a0b7dd  f668ab1ad9bb  
mds.cephfs.ceph2.qfmprj   ceph2               running (3m)     33s ago   3m    21.4M        -  16.2.5   6933c2a0b7dd  d8bb9979aca2  
mds.cephfs.ceph3.ifskba   ceph3               running (3m)     33s ago   3m    21.7M        -  16.2.5   6933c2a0b7dd  d0fd9a78c1d8  
mgr.ceph1.nwbihh          ceph1  *:9283       running (11m)    27s ago   3d     489M        -  16.2.5   6933c2a0b7dd  90bd2d0704b3  
mgr.ceph2.ednijf          ceph2  *:8443,9283  running (10m)    33s ago   3d     468M        -  16.2.5   6933c2a0b7dd  9b69c446355f  
mon.ceph1                 ceph1               running (11m)    27s ago   3d     126M    2048M  16.2.5   6933c2a0b7dd  736fd1630a06  
mon.ceph2                 ceph2               running (10m)    33s ago   3d     119M    2048M  16.2.5   6933c2a0b7dd  629bc53c8992  
mon.ceph3                 ceph3               running (10m)    33s ago   3d     135M    2048M  16.2.5   6933c2a0b7dd  1b56b8a0d9ac  
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049       running (47s)    27s ago  47s    50.5M        -  3.5      6933c2a0b7dd  8af0f7d10e2b  
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049       running (42s)    33s ago  41s    27.4M        -  3.5      6933c2a0b7dd  d2e2238859e8  
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049       running (36s)    33s ago  36s    27.2M        -  3.5      6933c2a0b7dd  31e148631f9f  
node-exporter.ceph1       ceph1  *:9100       running (11m)    27s ago   3d    22.6M        -  0.18.1   e5a616e4b9cf  3664470b1641  
node-exporter.ceph2       ceph2  *:9100       running (11m)    33s ago   3d    24.9M        -  0.18.1   e5a616e4b9cf  140f7e84ba38  
node-exporter.ceph3       ceph3  *:9100       running (10m)    33s ago   3d    25.0M        -  0.18.1   e5a616e4b9cf  de98304ceaea  
osd.0                     ceph2               running (10m)    33s ago   3d    62.4M    4096M  16.2.5   6933c2a0b7dd  6ebce33417c2  
osd.1                     ceph3               running (10m)    33s ago   3d    85.9M    4096M  16.2.5   6933c2a0b7dd  6d43f1b7bfde  
osd.2                     ceph1               running (10m)    27s ago   3d    52.4M    4096M  16.2.5   6933c2a0b7dd  07d1183306d1  
osd.3                     ceph2               running (10m)    33s ago   3d    71.3M    4096M  16.2.5   6933c2a0b7dd  a77d771d3e6d  
osd.4                     ceph3               running (10m)    33s ago   3d    52.1M    4096M  16.2.5   6933c2a0b7dd  3d82752a8fb1  
osd.5                     ceph1               running (10m)    27s ago   3d    56.3M    4096M  16.2.5   6933c2a0b7dd  6f7b5df090d5  
osd.6                     ceph2               running (10m)    33s ago   3d    61.5M    4096M  16.2.5   6933c2a0b7dd  7f769655adc7  
osd.7                     ceph3               running (10m)    33s ago   3d    54.7M    4096M  16.2.5   6933c2a0b7dd  24d63cd18dde  
osd.8                     ceph1               running (10m)    27s ago   3d    46.7M    4096M  16.2.5   6933c2a0b7dd  667fb8831e53  
prometheus.ceph1          ceph1  *:9095       running (11m)    27s ago   3d    97.2M        -  2.18.1   de242295e225  f67fbc035cba  
rgw.rgw.ceph1.lgcvfw      ceph1  *:80         running (11m)    27s ago  25m    62.2M        -  16.2.5   6933c2a0b7dd  c8e1c9701010  
rgw.rgw.ceph2.eqykjt      ceph2  *:80         running (11m)    33s ago  25m    83.3M        -  16.2.5   6933c2a0b7dd  d8ba326c22b8  
rgw.rgw.ceph3.eybvwe      ceph3  *:80         running (10m)    33s ago  25m    81.9M        -  16.2.5   6933c2a0b7dd  d89c2b87e8e4  

Check the status in the Ceph Dashboard.


5.9 Deploy iSCSI

Create the pool that iSCSI needs.

#ceph osd pool create  iscsi_pool 32 32
#ceph osd pool application enable iscsi_pool iscsi

So far we have specified placement on the command line. Because iSCSI takes quite a few configuration parameters, we switch to a YAML spec for it (iSCSI can also be deployed from the command line).

#vi iscsi.yaml
service_type: iscsi
service_id: gw
placement:
  hosts:
    - ceph1
    - ceph2
    - ceph3
spec:
  pool: iscsi_pool
  trusted_ip_list: "192.168.149.128,192.168.149.145,192.168.149.146"
  api_user: admin
  api_password: admin
  api_secure: false

Deploy it with the apply command. If you are familiar with Kubernetes, apply will immediately make sense: cephadm is declarative too, so to change a configuration parameter you just edit the YAML file and apply it again.

[root@ceph1 ~]# ceph orch apply -i iscsi.yaml
Scheduled iscsi.gw update...

Check the service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT          
alertmanager               ?:9093,9094      1/1  5s ago     3d   count:1            
crash                                       3/3  9s ago     3d   *                  
grafana                    ?:3000           1/1  5s ago     3d   count:1            
iscsi.gw                                    3/3  9s ago     21s  ceph1;ceph2;ceph3  
mds.cephfs                                  3/3  9s ago     8m   count:3            
mgr                                         2/2  9s ago     3d   count:2            
mon                                         3/5  9s ago     3d   count:5            
nfs.nfs                                     3/3  9s ago     5m   count:3            
node-exporter              ?:9100           3/3  9s ago     3d   *                  
osd.all-available-devices                  9/12  9s ago     3d   *                  
prometheus                 ?:9095           1/1  5s ago     3d   count:1            
rgw.rgw                    ?:80             3/3  9s ago     30m  count:3   

Check the daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                      HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      ConTAINER ID  
alertmanager.ceph1        ceph1  *:9093,9094  running (15m)    28s ago   3d    23.6M        -  0.20.0   0881eb8f169f  66918f633189  
crash.ceph1               ceph1               running (15m)    28s ago   3d    4072k        -  16.2.5   6933c2a0b7dd  dbe00b18ef37  
crash.ceph2               ceph2               running (15m)    32s ago   3d    7470k        -  16.2.5   6933c2a0b7dd  c24dcd762121  
crash.ceph3               ceph3               running (15m)    32s ago   3d    10.1M        -  16.2.5   6933c2a0b7dd  eed1c74352f0  
grafana.ceph1             ceph1  *:3000       running (15m)    28s ago   3d    47.3M        -  6.7.4    ae5c36c3d3cd  cbbfae0f90e4  
iscsi.gw.ceph1.esngze     ceph1               running (42s)    28s ago  41s    64.2M        -  3.5      6933c2a0b7dd  96f1250ba7f1  
iscsi.gw.ceph2.ypjkrx     ceph2               running (39s)    32s ago  39s    61.9M        -  3.5      6933c2a0b7dd  9cc090aa85ec  
iscsi.gw.ceph3.hntxjs     ceph3               running (36s)    32s ago  36s    28.2M        -  3.5      6933c2a0b7dd  d9a6906671a4  
mds.cephfs.ceph1.ooraiz   ceph1               running (8m)     28s ago   8m    22.1M        -  16.2.5   6933c2a0b7dd  f668ab1ad9bb  
mds.cephfs.ceph2.qfmprj   ceph2               running (8m)     32s ago   8m    20.3M        -  16.2.5   6933c2a0b7dd  d8bb9979aca2  
mds.cephfs.ceph3.ifskba   ceph3               running (8m)     32s ago   8m    22.9M        -  16.2.5   6933c2a0b7dd  d0fd9a78c1d8  
mgr.ceph1.nwbihh          ceph1  *:9283       running (15m)    28s ago   3d     465M        -  16.2.5   6933c2a0b7dd  90bd2d0704b3  
mgr.ceph2.ednijf          ceph2  *:8443,9283  running (15m)    32s ago   3d     434M        -  16.2.5   6933c2a0b7dd  9b69c446355f  
mon.ceph1                 ceph1               running (15m)    28s ago   3d     128M    2048M  16.2.5   6933c2a0b7dd  736fd1630a06  
mon.ceph2                 ceph2               running (15m)    32s ago   3d     115M    2048M  16.2.5   6933c2a0b7dd  629bc53c8992  
mon.ceph3                 ceph3               running (15m)    32s ago   3d     161M    2048M  16.2.5   6933c2a0b7dd  1b56b8a0d9ac  
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049       running (5m)     28s ago   5m    70.3M        -  3.5      6933c2a0b7dd  8af0f7d10e2b  
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049       running (5m)     32s ago   5m    77.7M        -  3.5      6933c2a0b7dd  d2e2238859e8  
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049       running (5m)     32s ago   5m    80.7M        -  3.5      6933c2a0b7dd  31e148631f9f  
node-exporter.ceph1       ceph1  *:9100       running (15m)    28s ago   3d    16.0M        -  0.18.1   e5a616e4b9cf  3664470b1641  
node-exporter.ceph2       ceph2  *:9100       running (15m)    32s ago   3d    23.5M        -  0.18.1   e5a616e4b9cf  140f7e84ba38  
node-exporter.ceph3       ceph3  *:9100       running (15m)    32s ago   3d    27.5M        -  0.18.1   e5a616e4b9cf  de98304ceaea  
osd.0                     ceph2               running (15m)    32s ago   3d    60.7M    4096M  16.2.5   6933c2a0b7dd  6ebce33417c2  
osd.1                     ceph3               running (15m)    32s ago   3d    90.3M    4096M  16.2.5   6933c2a0b7dd  6d43f1b7bfde  
osd.2                     ceph1               running (15m)    28s ago   3d    49.1M    4096M  16.2.5   6933c2a0b7dd  07d1183306d1  
osd.3                     ceph2               running (15m)    32s ago   3d    71.8M    4096M  16.2.5   6933c2a0b7dd  a77d771d3e6d  
osd.4                     ceph3               running (15m)    32s ago   3d    55.8M    4096M  16.2.5   6933c2a0b7dd  3d82752a8fb1  
osd.5                     ceph1               running (15m)    28s ago   3d    51.5M    4096M  16.2.5   6933c2a0b7dd  6f7b5df090d5  
osd.6                     ceph2               running (15m)    32s ago   3d    62.1M    4096M  16.2.5   6933c2a0b7dd  7f769655adc7  
osd.7                     ceph3               running (15m)    32s ago   3d    57.4M    4096M  16.2.5   6933c2a0b7dd  24d63cd18dde  
osd.8                     ceph1               running (15m)    28s ago   3d    49.2M    4096M  16.2.5   6933c2a0b7dd  667fb8831e53  
prometheus.ceph1          ceph1  *:9095       running (15m)    28s ago   3d    80.4M        -  2.18.1   de242295e225  f67fbc035cba  
rgw.rgw.ceph1.lgcvfw      ceph1  *:80         running (15m)    28s ago  30m    55.8M        -  16.2.5   6933c2a0b7dd  c8e1c9701010  
rgw.rgw.ceph2.eqykjt      ceph2  *:80         running (15m)    32s ago  30m    74.9M        -  16.2.5   6933c2a0b7dd  d8ba326c22b8  
rgw.rgw.ceph3.eybvwe      ceph3  *:80         running (15m)    32s ago  30m    83.8M        -  16.2.5   6933c2a0b7dd  d89c2b87e8e4 

Check the status in the Ceph Dashboard.


5.10 Add rbd-mirror

Deploy the rbd-mirror service.

[root@ceph-node1 ~]# ceph orch apply rbd-mirror --placement=3
Scheduled rbd-mirror update...

Check the service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT          
alertmanager               ?:9093,9094      1/1  9s ago     3d   count:1            
crash                                       3/3  12s ago    3d   *                  
grafana                    ?:3000           1/1  9s ago     3d   count:1            
iscsi.gw                                    3/3  12s ago    3m   ceph1;ceph2;ceph3  
mds.cephfs                                  3/3  12s ago    11m  count:3            
mgr                                         2/2  12s ago    3d   count:2            
mon                                         3/5  12s ago    3d   count:5            
nfs.nfs                                     3/3  12s ago    8m   count:3            
node-exporter              ?:9100           3/3  12s ago    3d   *                  
osd.all-available-devices                  9/12  12s ago    3d   *                  
prometheus                 ?:9095           1/1  9s ago     3d   count:1            
rbd-mirror                                  3/3  12s ago    23s  count:3            
rgw.rgw                    ?:80             3/3  12s ago    32m  count:3 

Check the daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                      HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      ConTAINER ID  
alertmanager.ceph1        ceph1  *:9093,9094  running (18m)    34s ago   3d    24.5M        -  0.20.0   0881eb8f169f  66918f633189  
crash.ceph1               ceph1               running (18m)    34s ago   3d    3309k        -  16.2.5   6933c2a0b7dd  dbe00b18ef37  
crash.ceph2               ceph2               running (18m)    37s ago   3d    11.6M        -  16.2.5   6933c2a0b7dd  c24dcd762121  
crash.ceph3               ceph3               running (18m)    37s ago   3d    10.1M        -  16.2.5   6933c2a0b7dd  eed1c74352f0  
grafana.ceph1             ceph1  *:3000       running (18m)    34s ago   3d    60.2M        -  6.7.4    ae5c36c3d3cd  cbbfae0f90e4  
iscsi.gw.ceph1.esngze     ceph1               running (3m)     34s ago   3m    53.1M        -  3.5      6933c2a0b7dd  96f1250ba7f1  
iscsi.gw.ceph2.ypjkrx     ceph2               running (3m)     37s ago   3m    66.1M        -  3.5      6933c2a0b7dd  9cc090aa85ec  
iscsi.gw.ceph3.hntxjs     ceph3               running (3m)     37s ago   3m    59.7M        -  3.5      6933c2a0b7dd  d9a6906671a4  
mds.cephfs.ceph1.ooraiz   ceph1               running (11m)    34s ago  11m    18.4M        -  16.2.5   6933c2a0b7dd  f668ab1ad9bb  
mds.cephfs.ceph2.qfmprj   ceph2               running (11m)    37s ago  11m    20.1M        -  16.2.5   6933c2a0b7dd  d8bb9979aca2  
mds.cephfs.ceph3.ifskba   ceph3               running (11m)    37s ago  11m    23.4M        -  16.2.5   6933c2a0b7dd  d0fd9a78c1d8  
mgr.ceph1.nwbihh          ceph1  *:9283       running (18m)    34s ago   3d     456M        -  16.2.5   6933c2a0b7dd  90bd2d0704b3  
mgr.ceph2.ednijf          ceph2  *:8443,9283  running (18m)    37s ago   3d     423M        -  16.2.5   6933c2a0b7dd  9b69c446355f  
mon.ceph1                 ceph1               running (18m)    34s ago   3d     129M    2048M  16.2.5   6933c2a0b7dd  736fd1630a06  
mon.ceph2                 ceph2               running (18m)    37s ago   3d     133M    2048M  16.2.5   6933c2a0b7dd  629bc53c8992  
mon.ceph3                 ceph3               running (18m)    37s ago   3d     158M    2048M  16.2.5   6933c2a0b7dd  1b56b8a0d9ac  
nfs.nfs.0.0.ceph1.ufsfpf  ceph1  *:2049       running (8m)     34s ago   8m    71.9M        -  3.5      6933c2a0b7dd  8af0f7d10e2b  
nfs.nfs.1.0.ceph2.blgraz  ceph2  *:2049       running (8m)     37s ago   8m    86.8M        -  3.5      6933c2a0b7dd  d2e2238859e8  
nfs.nfs.2.0.ceph3.edjscz  ceph3  *:2049       running (8m)     37s ago   8m    80.9M        -  3.5      6933c2a0b7dd  31e148631f9f  
node-exporter.ceph1       ceph1  *:9100       running (18m)    34s ago   3d    22.2M        -  0.18.1   e5a616e4b9cf  3664470b1641  
node-exporter.ceph2       ceph2  *:9100       running (18m)    37s ago   3d    22.0M        -  0.18.1   e5a616e4b9cf  140f7e84ba38  
node-exporter.ceph3       ceph3  *:9100       running (18m)    37s ago   3d    27.6M        -  0.18.1   e5a616e4b9cf  de98304ceaea  
osd.0                     ceph2               running (18m)    37s ago   3d    58.4M    4096M  16.2.5   6933c2a0b7dd  6ebce33417c2  
osd.1                     ceph3               running (18m)    37s ago   3d    91.3M    4096M  16.2.5   6933c2a0b7dd  6d43f1b7bfde  
osd.2                     ceph1               running (18m)    34s ago   3d    42.7M    4096M  16.2.5   6933c2a0b7dd  07d1183306d1  
osd.3                     ceph2               running (18m)    37s ago   3d    67.0M    4096M  16.2.5   6933c2a0b7dd  a77d771d3e6d  
osd.4                     ceph3               running (18m)    37s ago   3d    56.9M    4096M  16.2.5   6933c2a0b7dd  3d82752a8fb1  
osd.5                     ceph1               running (18m)    34s ago   3d    58.5M    4096M  16.2.5   6933c2a0b7dd  6f7b5df090d5  
osd.6                     ceph2               running (18m)    37s ago   3d    60.2M    4096M  16.2.5   6933c2a0b7dd  7f769655adc7  
osd.7                     ceph3               running (18m)    37s ago   3d    58.0M    4096M  16.2.5   6933c2a0b7dd  24d63cd18dde  
osd.8                     ceph1               running (18m)    34s ago   3d    47.8M    4096M  16.2.5   6933c2a0b7dd  667fb8831e53  
prometheus.ceph1          ceph1  *:9095       running (18m)    34s ago   3d    95.1M        -  2.18.1   de242295e225  f67fbc035cba  
rbd-mirror.ceph1.rkchvq   ceph1               running (46s)    34s ago  46s    32.6M        -  16.2.5   6933c2a0b7dd  46c56c1528f0  
rbd-mirror.ceph2.zdhnvt   ceph2               running (44s)    37s ago  44s    33.7M        -  16.2.5   6933c2a0b7dd  de9df26682c7  
rbd-mirror.ceph3.mssyuu   ceph3               running (42s)    37s ago  41s    29.9M        -  16.2.5   6933c2a0b7dd  679eabd8dd5c  
rgw.rgw.ceph1.lgcvfw      ceph1  *:80         running (18m)    34s ago  33m    50.1M        -  16.2.5   6933c2a0b7dd  c8e1c9701010  
rgw.rgw.ceph2.eqykjt      ceph2  *:80         running (18m)    37s ago  33m    73.9M        -  16.2.5   6933c2a0b7dd  d8ba326c22b8  
rgw.rgw.ceph3.eybvwe      ceph3  *:80         running (18m)    37s ago  33m    84.4M        -  16.2.5   6933c2a0b7dd  d89c2b87e8e4  

Check the status in the Ceph Dashboard.


5.11 Deploy cephfs-mirror

Deploy the cephfs-mirror service.

#ceph orch apply cephfs-mirror --placement=3
Scheduled cephfs-mirror update...

Check the service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE  PLACEMENT          
alertmanager               ?:9093,9094                    1/1  22s ago    4d   count:1            
cephfs-mirror                                             3/3  25s ago    52s  count:3            
crash                                                     3/3  25s ago    4d   *                  
grafana                    ?:3000                         1/1  22s ago    4d   count:1            
ingress.nfs.nfs            192.168.149.201:2050,1968      6/6  25s ago    7h   count:3            
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  25s ago    7h   count:3            
iscsi.gw                                                  3/3  25s ago    8h   ceph1;ceph2;ceph3  
mds.cephfs                                                3/3  25s ago    8h   count:3            
mgr                                                       2/2  25s ago    4d   count:2            
mon                                                       3/5  25s ago    4d   count:5            
nfs.nfs                                                   3/3  25s ago    8h   count:3            
node-exporter              ?:9100                         3/3  25s ago    4d   *                  
osd.all-available-devices                                9/12  25s ago    3d   *                  
prometheus                 ?:9095                         1/1  22s ago    4d   count:1            
rbd-mirror                                                3/3  25s ago    8h   count:3            
rgw.rgw                    ?:80                           3/3  25s ago    8h   count:3 

Check the daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                             HOST   PORTS        STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION         IMAGE ID      ConTAINER ID  
alertmanager.ceph1               ceph1  *:9093,9094  running (7h)      61s ago    4d    29.5M        -  0.20.0          0881eb8f169f  b9640288c91f  
cephfs-mirror.ceph1.cclxku       ceph1               running (86s)     61s ago   85s    26.1M        -  16.2.5          6933c2a0b7dd  88b6f8918aa8  
cephfs-mirror.ceph2.ozwgqg       ceph2               running (116s)    64s ago  116s    30.8M        -  16.2.5          6933c2a0b7dd  dcd8c6dcc4f0  
cephfs-mirror.ceph3.iefmko       ceph3               running (2m)      64s ago    2m    30.5M        -  16.2.5          6933c2a0b7dd  581ba61bad28  
crash.ceph1                      ceph1               running (7h)      61s ago    4d    1975k        -  16.2.5          6933c2a0b7dd  116ff5ce3646  
crash.ceph2                      ceph2               running (7h)      64s ago    3d    5448k        -  16.2.5          6933c2a0b7dd  354b8903892b  
crash.ceph3                      ceph3               running (7h)      64s ago    3d    5737k        -  16.2.5          6933c2a0b7dd  a5c223e5362c  
grafana.ceph1                    ceph1  *:3000       running (7h)      61s ago    4d    50.8M        -  6.7.4           ae5c36c3d3cd  d284162e0da4  
haproxy.nfs.nfs.ceph1.bdmrwh     ceph1  *:2050,1968  running (7h)      61s ago    7h    1899k        -  2.3.12-b99e499  b2284eda2221  3d9c640d828e  
haproxy.nfs.nfs.ceph2.ryaaaq     ceph2  *:2050,1968  running (7h)      64s ago    7h    1879k        -  2.3.12-b99e499  b2284eda2221  cffaa94c6c4a  
haproxy.nfs.nfs.ceph3.ysfamh     ceph3  *:2050,1968  running (7h)      64s ago    7h    1644k        -  2.3.12-b99e499  b2284eda2221  0e48492c6233  
haproxy.rgw.rgw.ceph1.nmvtig     ceph1  *:8080,1967  running (7h)      61s ago    7h    7574k        -  2.3.12-b99e499  b2284eda2221  5e521b875eba  
haproxy.rgw.rgw.ceph2.bbcnso     ceph2  *:8080,1967  running (7h)      64s ago    7h    4512k        -  2.3.12-b99e499  b2284eda2221  cbd03c96a1fd  
haproxy.rgw.rgw.ceph3.bcvvxl     ceph3  *:8080,1967  running (7h)      64s ago    7h    5775k        -  2.3.12-b99e499  b2284eda2221  535a37749437  
iscsi.gw.ceph1.esngze            ceph1               running (7h)      61s ago    8h    23.8M        -  3.5             6933c2a0b7dd  04d1bfd141a9  
iscsi.gw.ceph2.ypjkrx            ceph2               running (7h)      64s ago    8h    21.2M        -  3.5             6933c2a0b7dd  7f78b9c0deb0  
iscsi.gw.ceph3.hntxjs            ceph3               running (7h)      64s ago    8h    28.5M        -  3.5             6933c2a0b7dd  2fe9bb0b7bfe  
keepalived.nfs.nfs.ceph1.jprcvw  ceph1               running (7h)      61s ago    7h    3862k        -  2.0.5           073e0c3cd1b9  8e337e634255  
keepalived.nfs.nfs.ceph2.oiynik  ceph2               running (7h)      64s ago    7h    3493k        -  2.0.5           073e0c3cd1b9  a6d10588190e  
keepalived.nfs.nfs.ceph3.guulfc  ceph3               running (7h)      64s ago    7h    1853k        -  2.0.5           073e0c3cd1b9  23680bf04f55  
keepalived.rgw.rgw.ceph1.keriqf  ceph1               running (7h)      61s ago    7h    2902k        -  2.0.5           073e0c3cd1b9  a11d221ae1af  
keepalived.rgw.rgw.ceph2.gxshhg  ceph2               running (7h)      64s ago    7h    2461k        -  2.0.5           073e0c3cd1b9  59026981ef10  
keepalived.rgw.rgw.ceph3.tqaixq  ceph3               running (7h)      64s ago    7h    4294k        -  2.0.5           073e0c3cd1b9  25cfdab23bfe  
mds.cephfs.ceph1.ooraiz          ceph1               running (7h)      61s ago    8h    14.6M        -  16.2.5          6933c2a0b7dd  64246a825e21  
mds.cephfs.ceph2.qfmprj          ceph2               running (7h)      64s ago    8h    9323k        -  16.2.5          6933c2a0b7dd  f0faa5b9a7d5  
mds.cephfs.ceph3.ifskba          ceph3               running (7h)      64s ago    8h    9072k        -  16.2.5          6933c2a0b7dd  7f294af518f7  
mgr.ceph1.nwbihh                 ceph1  *:9283       running (7h)      61s ago    4d     161M        -  16.2.5          6933c2a0b7dd  f7b2470c5797  
mgr.ceph2.ednijf                 ceph2  *:8443,9283  running (7h)      64s ago    3d    28.2M        -  16.2.5          6933c2a0b7dd  e24b40d99e6d  
mon.ceph1                        ceph1               running (7h)      61s ago    4d     483M    2048M  16.2.5          6933c2a0b7dd  9936a8b6587b  
mon.ceph2                        ceph2               running (7h)      64s ago    3d     878M    2048M  16.2.5          6933c2a0b7dd  abe7eed2d100  
mon.ceph3                        ceph3               running (7h)      64s ago    3d     881M    2048M  16.2.5          6933c2a0b7dd  52d747e3d011  
nfs.nfs.0.0.ceph1.ufsfpf         ceph1  *:2049       running (7h)      61s ago    8h    25.4M        -  3.5             6933c2a0b7dd  fff5327b8415  
nfs.nfs.1.0.ceph2.blgraz         ceph2  *:2049       running (7h)      64s ago    8h    33.0M        -  3.5             6933c2a0b7dd  ea09e2429950  
nfs.nfs.2.0.ceph3.edjscz         ceph3  *:2049       running (7h)      64s ago    8h    41.8M        -  3.5             6933c2a0b7dd  dcc6fd5aa85f  
node-exporter.ceph1              ceph1  *:9100       running (7h)      61s ago    4d    13.8M        -  0.18.1          e5a616e4b9cf  69e5d1f3e310  
node-exporter.ceph2              ceph2  *:9100       running (7h)      64s ago    3d    15.8M        -  0.18.1          e5a616e4b9cf  cd41893212ca  
node-exporter.ceph3              ceph3  *:9100       running (7h)      64s ago    3d    15.3M        -  0.18.1          e5a616e4b9cf  9ab2b99dabd6  
osd.0                            ceph2               running (7h)      64s ago    3d    31.6M    4096M  16.2.5          6933c2a0b7dd  0c9dc80a1d74  
osd.1                            ceph3               running (7h)      64s ago    3d    47.2M    4096M  16.2.5          6933c2a0b7dd  c698fa4c50c0  
osd.2                            ceph1               running (7h)      61s ago    3d    39.3M    4096M  16.2.5          6933c2a0b7dd  69501861396e  
osd.3                            ceph2               running (7h)      64s ago    3d    50.0M    4096M  16.2.5          6933c2a0b7dd  fa9549c63716  
osd.4                            ceph3               running (7h)      64s ago    3d    37.8M    4096M  16.2.5          6933c2a0b7dd  d1fc644bd8f6  
osd.5                            ceph1               running (7h)      61s ago    3d    38.5M    4096M  16.2.5          6933c2a0b7dd  ed81fef0dd1c  
osd.6                            ceph2               running (7h)      64s ago    3d    29.7M    4096M  16.2.5          6933c2a0b7dd  68d0dcad316b  
osd.7                            ceph3               running (7h)      64s ago    3d    28.4M    4096M  16.2.5          6933c2a0b7dd  d3dc04b1d1ff  
osd.8                            ceph1               running (7h)      61s ago    3d    38.4M    4096M  16.2.5          6933c2a0b7dd  13f0d2b109ff  
prometheus.ceph1                 ceph1  *:9095       running (7h)      61s ago    4d     131M        -  2.18.1          de242295e225  e5fb2f5703d5  
rbd-mirror.ceph1.rkchvq          ceph1               running (7h)      61s ago    8h    7453k        -  16.2.5          6933c2a0b7dd  e5ca469f9184  
rbd-mirror.ceph2.zdhnvt          ceph2               running (7h)      64s ago    8h    6505k        -  16.2.5          6933c2a0b7dd  2933e22fd669  
rbd-mirror.ceph3.mssyuu          ceph3               running (7h)      64s ago    8h    10.5M        -  16.2.5          6933c2a0b7dd  862ef07ae291  
rgw.rgw.ceph1.lgcvfw             ceph1  *:80         running (7h)      61s ago    8h    66.9M        -  16.2.5          6933c2a0b7dd  9002673bd55a  
rgw.rgw.ceph2.eqykjt             ceph2  *:80         running (7h)      64s ago    8h    67.9M        -  16.2.5          6933c2a0b7dd  dbdddd0f10cd  
rgw.rgw.ceph3.eybvwe             ceph3  *:80         running (7h)      64s ago    8h    69.0M        -  16.2.5          6933c2a0b7dd  b896b40ef1d8  

Check which services are deployed from the command line.

[root@ceph1 ~]# ceph -s
  cluster:
    id:     36e7a21c-e3f7-11eb-8960-000c299df6ef
    health: HEALTH_WARN
            clock skew detected on mon.ceph2, mon.ceph3
 
  services:
    mon:           3 daemons, quorum ceph1,ceph2,ceph3 (age 58m)
    mgr:           ceph1.nwbihh(active, since 7h), standbys: ceph2.ednijf
    mds:           1/1 daemons up, 2 standby
    osd:           9 osds: 9 up (since 7h), 9 in (since 3d)
    cephfs-mirror: 3 daemons active (3 hosts)
    rbd-mirror:    3 daemons active (3 hosts)
    rgw:           3 daemons active (3 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   10 pools, 241 pgs
    objects: 233 objects, 9.4 KiB
    usage:   323 MiB used, 180 GiB / 180 GiB avail
    pgs:     241 active+clean
 
  io:
    client:   4.1 KiB/s rd, 4 op/s rd, 0 op/s wr
 


Check which services are deployed in the Ceph Dashboard.


5.12 Add ingress

In the previous steps we deployed RGW and NFS, each with three daemons, but those three daemons still act as independent endpoints: there is no load balancer in front of them to provide a single access point. cephadm packages haproxy and keepalived together as the ingress service. The rgw-ingress architecture is illustrated below:

(Figure: HAProxy for RGW - clients reach the keepalived VIP, and haproxy load-balances across the three RGW daemons)

Write the rgw-ingress placement spec and configuration parameters, then deploy it.

[root@ceph1 ~]# vi rgw-ingress.yaml 
service_type: ingress
service_id: rgw.rgw
placement:
  count: 3
spec:
  backend_service: rgw.rgw
  virtual_ip: 192.168.149.200/24
  frontend_port: 8080
  monitor_port: 1967
#ceph orch apply -i rgw-ingress.yaml 

Note

backend_service must match the actual service name as shown by ceph orch ls. frontend_port is the port exposed on the VIP, and monitor_port is the port of the haproxy status page.


To see what monitor_port actually does, enter the haproxy container and look at the haproxy configuration file: the frontend stats section binds port 1967 and defines the status-page user name and password.

# podman exec -it 5e521b875eba bash
root@ceph1:/var/lib/haproxy# cat haproxy.cfg 
# This file is generated by cephadm.
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/lib/haproxy/haproxy.pid
    maxconn     8000
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout queue           20s
    timeout connect         5s
    timeout http-request    1s
    timeout http-keep-alive 5s
    timeout client          1s
    timeout server          1s
    timeout check           5s
    maxconn                 8000

frontend stats
    mode http
    bind *:1967
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:vpxpoqmhxsetlmoerbck
    http-request use-service prometheus-exporter if { path /metrics }
    monitor-uri /health

frontend frontend
    bind *:8080
    default_backend backend

backend backend
    option forwardfor
    balance static-rr
    option httpchk HEAD / HTTP/1.0
    server rgw.rgw.ceph1.lgcvfw 192.168.149.128:80 check weight 100
    server rgw.rgw.ceph2.eqykjt 192.168.149.145:80 check weight 100
    server rgw.rgw.ceph3.eybvwe 192.168.149.146:80 check weight 100

Combining the VIP, monitor_port, and URI gives the address http://192.168.149.200:1967/stats (the rgw ingress VIP). Open it in a browser and enter the username and password, and you will indeed see the haproxy status page.
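
You can also sanity-check the VIP and the stats endpoints from the shell with curl (a sketch; the password is the one from the stats auth line of the generated haproxy.cfg above):

[root@ceph1 ~]# curl -s http://192.168.149.200:8080                # S3 endpoint through the ingress VIP
[root@ceph1 ~]# curl -s http://192.168.149.200:1967/health         # health check defined by monitor-uri
[root@ceph1 ~]# curl -s -u admin:vpxpoqmhxsetlmoerbck http://192.168.149.200:1967/stats | head    # stats page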

Check the Service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE   PLACEMENT          
alertmanager               ?:9093,9094                    1/1  67s ago    3d    count:1            
crash                                                     3/3  2m ago     3d    *                  
grafana                    ?:3000                         1/1  67s ago    3d    count:1                    
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  2m ago     17m   count:3            
iscsi.gw                                                  3/3  2m ago     76m   ceph1;ceph2;ceph3  
mds.cephfs                                                3/3  2m ago     84m   count:3            
mgr                                                       2/2  2m ago     3d    count:2            
mon                                                       3/5  2m ago     3d    count:5            
nfs.nfs                                                   3/3  2m ago     81m   count:3            
node-exporter              ?:9100                         3/3  2m ago     3d    *                  
osd.all-available-devices                                9/12  2m ago     3d    *                  
prometheus                 ?:9095                         1/1  67s ago    3d    count:1            
rbd-mirror                                                3/3  2m ago     73m   count:3            
rgw.rgw                    ?:80                           3/3  2m ago     105m  count:3            

Check the Daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                             HOST   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION         IMAGE ID      CONTAINER ID  
alertmanager.ceph1               ceph1  *:9093,9094  running (7h)      7m ago   4d    31.5M        -  0.20.0          0881eb8f169f  b9640288c91f  
cephfs-mirror.ceph1.cclxku       ceph1               running (28m)     7m ago  28m    16.5M        -  16.2.5          6933c2a0b7dd  88b6f8918aa8  
cephfs-mirror.ceph2.ozwgqg       ceph2               running (29m)     7m ago  29m    13.6M        -  16.2.5          6933c2a0b7dd  dcd8c6dcc4f0  
cephfs-mirror.ceph3.iefmko       ceph3               running (30m)     7m ago  30m    14.2M        -  16.2.5          6933c2a0b7dd  581ba61bad28  
crash.ceph1                      ceph1               running (7h)      7m ago   4d    1929k        -  16.2.5          6933c2a0b7dd  116ff5ce3646  
crash.ceph2                      ceph2               running (7h)      7m ago   3d    3451k        -  16.2.5          6933c2a0b7dd  354b8903892b  
crash.ceph3                      ceph3               running (7h)      7m ago   3d    4047k        -  16.2.5          6933c2a0b7dd  a5c223e5362c  
grafana.ceph1                    ceph1  *:3000       running (7h)      7m ago   4d    44.0M        -  6.7.4           ae5c36c3d3cd  d284162e0da4  
haproxy.nfs.nfs.ceph1.bdmrwh     ceph1  *:2050,1968  running (7h)      7m ago   7h    1870k        -  2.3.12-b99e499  b2284eda2221  3d9c640d828e  
haproxy.nfs.nfs.ceph2.ryaaaq     ceph2  *:2050,1968  running (7h)      7m ago   7h    1862k        -  2.3.12-b99e499  b2284eda2221  cffaa94c6c4a  
haproxy.nfs.nfs.ceph3.ysfamh     ceph3  *:2050,1968  running (7h)      7m ago   7h    1904k        -  2.3.12-b99e499  b2284eda2221  0e48492c6233  
haproxy.rgw.rgw.ceph1.nmvtig     ceph1  *:8080,1967  running (7h)      7m ago   7h    6089k        -  2.3.12-b99e499  b2284eda2221  5e521b875eba  
haproxy.rgw.rgw.ceph2.bbcnso     ceph2  *:8080,1967  running (7h)      7m ago   7h    2877k        -  2.3.12-b99e499  b2284eda2221  cbd03c96a1fd  
haproxy.rgw.rgw.ceph3.bcvvxl     ceph3  *:8080,1967  running (7h)      7m ago   7h    4265k        -  2.3.12-b99e499  b2284eda2221  535a37749437  
iscsi.gw.ceph1.esngze            ceph1               running (7h)      7m ago   8h    25.5M        -  3.5             6933c2a0b7dd  04d1bfd141a9  
iscsi.gw.ceph2.ypjkrx            ceph2               running (7h)      7m ago   8h    22.3M        -  3.5             6933c2a0b7dd  7f78b9c0deb0  
iscsi.gw.ceph3.hntxjs            ceph3               running (7h)      7m ago   8h    29.7M        -  3.5             6933c2a0b7dd  2fe9bb0b7bfe  
keepalived.nfs.nfs.ceph1.jprcvw  ceph1               running (7h)      7m ago   7h    3837k        -  2.0.5           073e0c3cd1b9  8e337e634255  
keepalived.nfs.nfs.ceph2.oiynik  ceph2               running (7h)      7m ago   7h    3468k        -  2.0.5           073e0c3cd1b9  a6d10588190e  
keepalived.nfs.nfs.ceph3.guulfc  ceph3               running (7h)      7m ago   7h    1732k        -  2.0.5           073e0c3cd1b9  23680bf04f55  
keepalived.rgw.rgw.ceph1.keriqf  ceph1               running (7h)      7m ago   7h    2667k        -  2.0.5           073e0c3cd1b9  a11d221ae1af  
keepalived.rgw.rgw.ceph2.gxshhg  ceph2               running (7h)      7m ago   7h    2969k        -  2.0.5           073e0c3cd1b9  59026981ef10  
keepalived.rgw.rgw.ceph3.tqaixq  ceph3               running (7h)      7m ago   7h    4600k        -  2.0.5           073e0c3cd1b9  25cfdab23bfe  
mds.cephfs.ceph1.ooraiz          ceph1               running (7h)      7m ago   8h    19.3M        -  16.2.5          6933c2a0b7dd  64246a825e21  
mds.cephfs.ceph2.qfmprj          ceph2               running (7h)      7m ago   8h    6790k        -  16.2.5          6933c2a0b7dd  f0faa5b9a7d5  
mds.cephfs.ceph3.ifskba          ceph3               running (7h)      7m ago   8h    10.0M        -  16.2.5          6933c2a0b7dd  7f294af518f7  
mgr.ceph1.nwbihh                 ceph1  *:9283       running (7h)      7m ago   4d     119M        -  16.2.5          6933c2a0b7dd  f7b2470c5797  
mgr.ceph2.ednijf                 ceph2  *:8443,9283  running (7h)      7m ago   3d    30.2M        -  16.2.5          6933c2a0b7dd  e24b40d99e6d  
mon.ceph1                        ceph1               running (7h)      7m ago   4d     526M    2048M  16.2.5          6933c2a0b7dd  9936a8b6587b  
mon.ceph2                        ceph2               running (7h)      7m ago   3d     898M    2048M  16.2.5          6933c2a0b7dd  abe7eed2d100  
mon.ceph3                        ceph3               running (7h)      7m ago   3d     894M    2048M  16.2.5          6933c2a0b7dd  52d747e3d011  
nfs.nfs.0.0.ceph1.ufsfpf         ceph1  *:2049       running (7h)      7m ago   8h    23.0M        -  3.5             6933c2a0b7dd  fff5327b8415  
nfs.nfs.1.0.ceph2.blgraz         ceph2  *:2049       running (7h)      7m ago   8h    31.3M        -  3.5             6933c2a0b7dd  ea09e2429950  
nfs.nfs.2.0.ceph3.edjscz         ceph3  *:2049       running (7h)      7m ago   8h    38.3M        -  3.5             6933c2a0b7dd  dcc6fd5aa85f  
node-exporter.ceph1              ceph1  *:9100       running (7h)      7m ago   4d    17.8M        -  0.18.1          e5a616e4b9cf  69e5d1f3e310  
node-exporter.ceph2              ceph2  *:9100       running (7h)      7m ago   3d    18.3M        -  0.18.1          e5a616e4b9cf  cd41893212ca  
node-exporter.ceph3              ceph3  *:9100       running (7h)      7m ago   3d    16.9M        -  0.18.1          e5a616e4b9cf  9ab2b99dabd6  
osd.0                            ceph2               running (7h)      7m ago   3d    34.2M    4096M  16.2.5          6933c2a0b7dd  0c9dc80a1d74  
osd.1                            ceph3               running (7h)      7m ago   3d    56.3M    4096M  16.2.5          6933c2a0b7dd  c698fa4c50c0  
osd.2                            ceph1               running (7h)      7m ago   3d    44.1M    4096M  16.2.5          6933c2a0b7dd  69501861396e  
osd.3                            ceph2               running (7h)      7m ago   3d    49.1M    4096M  16.2.5          6933c2a0b7dd  fa9549c63716  
osd.4                            ceph3               running (7h)      7m ago   3d    36.3M    4096M  16.2.5          6933c2a0b7dd  d1fc644bd8f6  
osd.5                            ceph1               running (7h)      7m ago   3d    42.7M    4096M  16.2.5          6933c2a0b7dd  ed81fef0dd1c  
osd.6                            ceph2               running (7h)      7m ago   3d    31.6M    4096M  16.2.5          6933c2a0b7dd  68d0dcad316b  
osd.7                            ceph3               running (7h)      7m ago   3d    29.7M    4096M  16.2.5          6933c2a0b7dd  d3dc04b1d1ff  
osd.8                            ceph1               running (7h)      7m ago   3d    40.7M    4096M  16.2.5          6933c2a0b7dd  13f0d2b109ff  
prometheus.ceph1                 ceph1  *:9095       running (7h)      7m ago   4d     107M        -  2.18.1          de242295e225  e5fb2f5703d5  
rbd-mirror.ceph1.rkchvq          ceph1               running (7h)      7m ago   8h    9223k        -  16.2.5          6933c2a0b7dd  e5ca469f9184  
rbd-mirror.ceph2.zdhnvt          ceph2               running (7h)      7m ago   8h    8652k        -  16.2.5          6933c2a0b7dd  2933e22fd669  
rbd-mirror.ceph3.mssyuu          ceph3               running (7h)      7m ago   8h    12.6M        -  16.2.5          6933c2a0b7dd  862ef07ae291  
rgw.rgw.ceph1.lgcvfw             ceph1  *:80         running (7h)      7m ago   9h    73.1M        -  16.2.5          6933c2a0b7dd  9002673bd55a  
rgw.rgw.ceph2.eqykjt             ceph2  *:80         running (7h)      7m ago   9h    67.5M        -  16.2.5          6933c2a0b7dd  dbdddd0f10cd  
rgw.rgw.ceph3.eybvwe             ceph3  *:80         running (7h)      7m ago   9h    67.5M        -  16.2.5          6933c2a0b7dd  b896b40ef1d8  

Each service gets its own ingress: we deployed the rgw ingress above, NFS has a corresponding ingress as well, and in the future there will also be an ingress for the Ceph Dashboard. For now only the rgw and nfs ingresses can be deployed. The nfs ingress architecture is shown below:

(Figure: HAProxy_for_NFS, nfs-ingress architecture)

Edit the nfs ingress configuration file and deploy it.

[root@ceph1 ~]# cat nfs-ingress.yaml 
service_type: ingress
service_id: nfs.nfs
placement:
  count: 3
spec:
  backend_service: nfs.nfs
  virtual_ip: 192.168.149.201/24
  frontend_port: 2050
  monitor_port: 1968
[root@ceph1 ~]# ceph orch apply -i nfs-ingress.yaml 
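
Once the ingress is active, NFS clients can mount through the VIP and frontend_port instead of pointing at a single ganesha daemon. A minimal sketch, assuming /cephfs is the pseudo-path of the export created in the earlier NFS step (replace it with your actual export path; cephadm's ganesha serves NFSv4 only):

[root@ceph1 ~]# mkdir -p /mnt/nfs
[root@ceph1 ~]# mount -t nfs -o port=2050,nfsvers=4.1 192.168.149.201:/cephfs /mnt/nfs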

Check the Service status.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED  AGE  PLACEMENT          
alertmanager               ?:9093,9094                    1/1  91s ago    4d   count:1            
cephfs-mirror                                             3/3  94s ago    32m  count:3            
crash                                                     3/3  94s ago    4d   *                  
grafana                    ?:3000                         1/1  91s ago    4d   count:1            
ingress.nfs.nfs            192.168.149.201:2050,1968      6/6  94s ago    7h   count:3            
ingress.rgw.rgw            192.168.149.200:8080,1967      6/6  94s ago    7h   count:3            
iscsi.gw                                                  3/3  94s ago    8h   ceph1;ceph2;ceph3  
mds.cephfs                                                3/3  94s ago    8h   count:3            
mgr                                                       2/2  94s ago    4d   count:2            
mon                                                       3/5  94s ago    4d   count:5            
nfs.nfs                                                   3/3  94s ago    8h   count:3            
node-exporter              ?:9100                         3/3  94s ago    4d   *                  
osd.all-available-devices                                9/12  94s ago    3d   *                  
prometheus                 ?:9095                         1/1  91s ago    4d   count:1            
rbd-mirror                                                3/3  94s ago    8h   count:3            
rgw.rgw                    ?:80                           3/3  94s ago    9h   count:3        

Check the Daemon status.

[root@ceph1 ~]# ceph orch ps
NAME                             HOST   PORTS        STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION         IMAGE ID      CONTAINER ID  
alertmanager.ceph1               ceph1  *:9093,9094  running (4m)      13s ago    3d    22.7M        -  0.20.0          0881eb8f169f  b9640288c91f  
crash.ceph1                      ceph1               running (4m)      13s ago    3d    2721k        -  16.2.5          6933c2a0b7dd  116ff5ce3646  
crash.ceph2                      ceph2               running (4m)      29s ago    3d    17.1M        -  16.2.5          6933c2a0b7dd  354b8903892b  
crash.ceph3                      ceph3               running (4m)      29s ago    3d    7792k        -  16.2.5          6933c2a0b7dd  a5c223e5362c  
grafana.ceph1                    ceph1  *:3000       running (4m)      13s ago    3d    52.5M        -  6.7.4           ae5c36c3d3cd  d284162e0da4  
haproxy.nfs.nfs.ceph1.bdmrwh     ceph1  *:2050,1968  running (64s)     13s ago   63s    5603k        -  2.3.12-b99e499  b2284eda2221  3d9c640d828e  
haproxy.nfs.nfs.ceph2.ryaaaq     ceph2  *:2050,1968  running (61s)     29s ago   61s    6341k        -  2.3.12-b99e499  b2284eda2221  cffaa94c6c4a  
haproxy.nfs.nfs.ceph3.ysfamh     ceph3  *:2050,1968  running (59s)     29s ago   59s    6353k        -  2.3.12-b99e499  b2284eda2221  0e48492c6233  
haproxy.rgw.rgw.ceph1.nmvtig     ceph1  *:8080,1967  running (2m)      13s ago    2m    13.9M        -  2.3.12-b99e499  b2284eda2221  5e521b875eba  
haproxy.rgw.rgw.ceph2.bbcnso     ceph2  *:8080,1967  running (2m)      29s ago    2m    4916k        -  2.3.12-b99e499  b2284eda2221  cbd03c96a1fd  
haproxy.rgw.rgw.ceph3.bcvvxl     ceph3  *:8080,1967  running (2m)      29s ago    2m    4362k        -  2.3.12-b99e499  b2284eda2221  535a37749437  
iscsi.gw.ceph1.esngze            ceph1               running (4m)      13s ago   61m    49.2M        -  3.5             6933c2a0b7dd  04d1bfd141a9  
iscsi.gw.ceph2.ypjkrx            ceph2               running (4m)      29s ago   61m    62.3M        -  3.5             6933c2a0b7dd  7f78b9c0deb0  
iscsi.gw.ceph3.hntxjs            ceph3               running (4m)      29s ago   61m    64.5M        -  3.5             6933c2a0b7dd  2fe9bb0b7bfe  
keepalived.nfs.nfs.ceph1.jprcvw  ceph1               running (53s)     13s ago   52s    2679k        -  2.0.5           073e0c3cd1b9  8e337e634255  
keepalived.nfs.nfs.ceph2.oiynik  ceph2               running (57s)     29s ago   56s    1581k        -  2.0.5           073e0c3cd1b9  a6d10588190e  
keepalived.nfs.nfs.ceph3.guulfc  ceph3               running (50s)     29s ago   50s    1623k        -  2.0.5           073e0c3cd1b9  23680bf04f55  
keepalived.rgw.rgw.ceph1.keriqf  ceph1               running (84s)     13s ago   83s    1979k        -  2.0.5           073e0c3cd1b9  a11d221ae1af  
keepalived.rgw.rgw.ceph2.gxshhg  ceph2               running (105s)    29s ago  105s    6455k        -  2.0.5           073e0c3cd1b9  59026981ef10  
keepalived.rgw.rgw.ceph3.tqaixq  ceph3               running (67s)     29s ago   66s    1581k        -  2.0.5           073e0c3cd1b9  25cfdab23bfe  
mds.cephfs.ceph1.ooraiz          ceph1               running (4m)      13s ago   69m    16.5M        -  16.2.5          6933c2a0b7dd  64246a825e21  
mds.cephfs.ceph2.qfmprj          ceph2               running (4m)      29s ago   69m    19.6M        -  16.2.5          6933c2a0b7dd  f0faa5b9a7d5  
mds.cephfs.ceph3.ifskba          ceph3               running (4m)      29s ago   69m    22.0M        -  16.2.5          6933c2a0b7dd  7f294af518f7  
mgr.ceph1.nwbihh                 ceph1  *:9283       running (4m)      13s ago    3d     439M        -  16.2.5          6933c2a0b7dd  f7b2470c5797  
mgr.ceph2.ednijf                 ceph2  *:8443,9283  running (4m)      29s ago    3d     412M        -  16.2.5          6933c2a0b7dd  e24b40d99e6d  
mon.ceph1                        ceph1               running (4m)      13s ago    3d    86.0M    2048M  16.2.5          6933c2a0b7dd  9936a8b6587b  
mon.ceph2                        ceph2               running (4m)      29s ago    3d     113M    2048M  16.2.5          6933c2a0b7dd  abe7eed2d100  
mon.ceph3                        ceph3               running (4m)      29s ago    3d     127M    2048M  16.2.5          6933c2a0b7dd  52d747e3d011  
nfs.nfs.0.0.ceph1.ufsfpf         ceph1  *:2049       running (4m)      13s ago   66m    64.5M        -  3.5             6933c2a0b7dd  fff5327b8415  
nfs.nfs.1.0.ceph2.blgraz         ceph2  *:2049       running (4m)      29s ago   66m    72.0M        -  3.5             6933c2a0b7dd  ea09e2429950  
nfs.nfs.2.0.ceph3.edjscz         ceph3  *:2049       running (4m)      29s ago   66m    58.7M        -  3.5             6933c2a0b7dd  dcc6fd5aa85f  
node-exporter.ceph1              ceph1  *:9100       running (4m)      13s ago    3d    15.0M        -  0.18.1          e5a616e4b9cf  69e5d1f3e310  
node-exporter.ceph2              ceph2  *:9100       running (4m)      29s ago    3d    24.0M        -  0.18.1          e5a616e4b9cf  cd41893212ca  
node-exporter.ceph3              ceph3  *:9100       running (4m)      29s ago    3d    24.7M        -  0.18.1          e5a616e4b9cf  9ab2b99dabd6  
osd.0                            ceph2               running (4m)      29s ago    3d    66.2M    4096M  16.2.5          6933c2a0b7dd  0c9dc80a1d74  
osd.1                            ceph3               running (4m)      29s ago    3d    70.5M    4096M  16.2.5          6933c2a0b7dd  c698fa4c50c0  
osd.2                            ceph1               running (4m)      13s ago    3d    42.4M    4096M  16.2.5          6933c2a0b7dd  69501861396e  
osd.3                            ceph2               running (4m)      29s ago    3d    54.2M    4096M  16.2.5          6933c2a0b7dd  fa9549c63716  
osd.4                            ceph3               running (4m)      29s ago    3d    84.9M    4096M  16.2.5          6933c2a0b7dd  d1fc644bd8f6  
osd.5                            ceph1               running (4m)      13s ago    3d    36.8M    4096M  16.2.5          6933c2a0b7dd  ed81fef0dd1c  
osd.6                            ceph2               running (4m)      29s ago    3d    58.6M    4096M  16.2.5          6933c2a0b7dd  68d0dcad316b  
osd.7                            ceph3               running (4m)      29s ago    3d    65.3M    4096M  16.2.5          6933c2a0b7dd  d3dc04b1d1ff  
osd.8                            ceph1               running (4m)      13s ago    3d    42.9M    4096M  16.2.5          6933c2a0b7dd  13f0d2b109ff  
prometheus.ceph1                 ceph1  *:9095       running (35s)     13s ago    3d     110M        -  2.18.1          de242295e225  e5fb2f5703d5  
rbd-mirror.ceph1.rkchvq          ceph1               running (4m)      13s ago   58m    15.3M        -  16.2.5          6933c2a0b7dd  e5ca469f9184  
rbd-mirror.ceph2.zdhnvt          ceph2               running (4m)      29s ago   58m    24.7M        -  16.2.5          6933c2a0b7dd  2933e22fd669  
rbd-mirror.ceph3.mssyuu          ceph3               running (4m)      29s ago   58m    33.4M        -  16.2.5          6933c2a0b7dd  862ef07ae291  
rgw.rgw.ceph1.lgcvfw             ceph1  *:80         running (4m)      13s ago   91m    58.0M        -  16.2.5          6933c2a0b7dd  9002673bd55a  
rgw.rgw.ceph2.eqykjt             ceph2  *:80         running (4m)      29s ago   91m    75.6M        -  16.2.5          6933c2a0b7dd  dbdddd0f10cd  
rgw.rgw.ceph3.eybvwe             ceph3  *:80         running (4m)      29s ago   91m    75.7M        -  16.2.5          6933c2a0b7dd  b896b40ef1d8  

6. Troubleshooting

When a deployment runs into problems, run the following command to see detailed information.

# ceph log last cephadm
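
You can also follow the cephadm log channel live while reproducing the problem (the second form raises the verbosity to debug):

# ceph -W cephadm
# ceph -W cephadm --watch-debug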

You can also look directly at Service-level or Daemon-level logs.

# ceph orch ls --service_name=alertmanager --format yaml
# ceph orch ps --service-name <service-name> --daemon-id <daemon-id> --format yaml
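
If you need the raw journald log of a single daemon, cephadm can pull it on the host where that daemon runs (osd.0 below is only an example name; extra arguments after -- are passed to journalctl):

[root@ceph1 ~]# cephadm logs --name osd.0
[root@ceph1 ~]# cephadm logs --name osd.0 -- -n 50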

When a daemon is in the error or stopped state, you can bring it back with the following command:

ceph orch daemon restart rgw.rgw.ceph3.sfepof

When several Daemons are in a bad state, you can also restart the Service directly, which automatically restarts the associated Daemons.

[root@ceph1 ~]# ceph orch start mds.cephfs
Scheduled to start mds.cephfs.ceph1.znbbqq on host 'ceph1'
Scheduled to start mds.cephfs.ceph2.iazuaf on host 'ceph2'
Scheduled to start mds.cephfs.ceph3.hjuvue on host 'ceph3'

If removing a service gets stuck in a perpetual deleting state, rebooting the host can clear it.

[root@ceph1 ~]# ceph orch ls
NAME                       PORTS                      RUNNING  REFRESHED   AGE  PLACEMENT          
alertmanager               ?:9093,9094                    1/1  18m ago     10d  count:1            
cephfs-mirror                                             3/3  18m ago     2d   count:3            
crash                                                     5/5  19m ago     10d  *                  
grafana                    ?:3000                         1/1  18m ago     10d  count:1            
ingress.mgr                192.168.149.200:8443,8443      0/4    2h   ceph1;ceph2