If you are interested in "[Ceph in Brief] Ceph's Logical Structure", this post is a good read: it walks through Ceph's architecture and principles, and also collects the related material listed below.
Contents:
- [Ceph in Brief] Ceph's Logical Structure (Ceph architecture and principles)
- Ceph Study Notes 2: Using a Ceph Storage Backend with Kolla-Ansible
- Ceph File System: the Slickest Ceph Dashboard and Ceph Monitoring (5)
- Installing and Using Ceph 0.80 (CentOS 7 / ceph-deploy)
- CephFS + Samba: Building a Ceph-based File Sharing Service
[Ceph in Brief] Ceph's Logical Structure (Ceph architecture and principles)
The Structure of Ceph
In "[Ceph in Brief] What is Ceph" we introduced the basic ideas behind Ceph. Here we look at Ceph's basic structure.
- Base storage system: RADOS
At the bottom is where data is actually stored. Physically it consists of a large number of nodes, with a middle layer on top that provides Ceph's reliability, self-management, and scalability. This layer is called RADOS (Reliable, Autonomic, Distributed Object Store).
- librados
We also want this to be transparent to clients: users should not have to care how the lower layers are implemented and should be able to develop directly against Ceph. So a library, librados, is added on top.
These library functions normally live on the same node as the application, which is why they are also called the local API.
- RESTful API and higher-level interfaces
Since Ceph is written in C++, the interfaces librados provides are C/C++ interfaces.
We also want Ceph to be compatible with distributed systems such as Amazon S3 and Swift, so further layers can be added on top, such as RADOS GW, RBD, and Ceph FS.
RADOS GW, for example, is essentially a gateway that performs protocol translation, making Ceph compatible with S3 and Swift externally.
RBD (RADOS Block Device) is a block device interface; the operating system above it sees what looks like a raw disk.
Besides the block interface there is also a file system interface: Ceph FS is a POSIX-compatible distributed file system.
So what is the difference between the librados API and RADOS GW?
The level of abstraction, and therefore the target scenario, differs. librados is lower level and lets developers access the state of the stored objects, enabling deep customization.
RADOS GW hides many of these details. It targets application developers, so it has the notions of user accounts, containers for stored data, and data objects, which suits typical web object-storage applications.
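As a concrete illustration of the librados level of access, the sketch below uses the rados CLI (a thin wrapper over librados) to write and read a raw object; the pool name testpool and the object/file names are assumptions made for this example:
$ echo "hello rados" > /tmp/obj.txt
$ rados -p testpool put my-object /tmp/obj.txt    # store the file as an object
$ rados -p testpool ls                            # should list my-object
$ rados -p testpool get my-object /tmp/obj.out    # read it back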
The Logical Structure of RADOS
The previous section covered Ceph's layered architecture; its most important, lowest layer, RADOS, is the focus of the rest of this note.
First, the roles inside RADOS:
- Clients
As the name suggests, a client can be a program or a command line; either way, users interact with the storage nodes through a Client.
- OSD (Object Storage Device)
The nodes that store data are called OSDs. An OSD is really a server with an operating system and a file system installed; at minimum it has a single-core CPU, some memory, one disk, and one NIC. A server that small hardly exists in practice, so several OSDs are usually deployed on one larger server.
Each OSD runs a daemon that accepts client connections, talks to the monitors and the other OSDs, and cooperates with the other OSDs on data storage and maintenance. In other words, the daemon implements the OSD's logical functions.
- Monitor
Monitors handle detection and maintenance of the system state. OSDs exchange node state information with the monitors, and together this forms the global metadata, the Cluster Map. With the Cluster Map, the location of any piece of data can be computed.
A traditional distributed storage system usually has a dedicated metadata server holding the mapping from data blocks to nodes, and performance is bounded by that server. In RADOS, a Client obtains the Cluster Map from the OSDs and monitors, keeps a local copy, and computes an object's storage location locally. This sidesteps the metadata server entirely: no table lookup is needed.
The Cluster Map is not static: it must be updated when an OSD fails or a new OSD joins, but such events are far less frequent than client data accesses.
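To see this client-side placement computation in action, you can fetch the current map and ask where an object would land; a minimal sketch, assuming a pool named testpool and an arbitrary object name:
$ ceph osd getmap -o /tmp/osdmap      # fetch the current OSD map
$ ceph osd map testpool my-object     # show the PG and acting OSD set computed for this object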
Ceph Study Notes 2: Using a Ceph Storage Backend with Kolla-Ansible
Environment
- For how to use Kolla-Ansible, see "Deploying OpenStack Pike on a Single CentOS 7 Node with Kolla-Ansible".
- For deploying the Ceph services, see "Ceph Study Notes 1: Multi-Node Deployment of the Mimic Release".
Configuring Ceph
- Log in as the osdev user:
$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/
Creating Pools
Create the image pool
- Used to store Glance images:
$ ceph osd pool create images 32 32
pool 'images' created
Create the volume pools
- Used to store Cinder volumes:
$ ceph osd pool create volumes 32 32
pool 'volumes' created
- Used to store Cinder volume backups:
$ ceph osd pool create backups 32 32
pool 'backups' created
Create the VM pool
- Used to store virtual machine system disks:
$ ceph osd pool create vms 32 32
pool 'vms' created
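The pg_num/pgp_num value of 32 used above is just a size chosen for this small cluster. Depending on the Ceph release you may also want to tag the new pools with the rbd application and verify their settings; a hedged sketch:
$ ceph osd pool application enable images rbd
$ ceph osd pool application enable volumes rbd
$ ceph osd pool application enable backups rbd
$ ceph osd pool application enable vms rbd
$ ceph osd pool get images pg_num    # verify the placement group count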
List the pools
$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms
Creating Users
List users
- List all users:
$ ceph auth list
installed auth entries:
mds.osdev01
key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
mds.osdev02
key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
mds.osdev03
key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
osd.0
key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQD9JH5bbPi6IRAA7DbwaCh5JBaa6RfWPoe9VQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g==
caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow==
caps: [mon] allow profile bootstrap-rgw
client.rgw.osdev01
key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q==
caps: [mon] allow rw
caps: [osd] allow rwx
client.rgw.osdev02
key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w==
caps: [mon] allow rw
caps: [osd] allow rwx
client.rgw.osdev03
key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A==
caps: [mon] allow rw
caps: [osd] allow rwx
mgr.osdev01
key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
mgr.osdev02
key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
mgr.osdev03
key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
- Show a specific user:
$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
Create the Glance user
- Create the glance user and grant it access to the images pool:
$ ceph auth get-or-create client.glance
[client.glance]
key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance
- View and save the glance user's keyring file:
$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
caps mon = "allow r"
caps osd = "allow rwx pool=images"
$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
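Note that the same user could also be created with its caps in a single step instead of get-or-create followed by auth caps; a hedged equivalent:
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images' -o /opt/ceph/deploy/ceph.client.glance.keyring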
Create the Cinder users
- Create the cinder-volume user and grant it access to the volumes pool:
$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume
- View and save the cinder-volume user's keyring file:
$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
caps mon = "allow r"
caps osd = "allow rwx pool=volumes"
$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume
- Create the cinder-backup user and grant it access to the volumes and backups pools:
$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup
- View and save the cinder-backup user's keyring file:
$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
caps mon = "allow r"
caps osd = "allow rwx pool=volumes, allow rwx pool=backups"
$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup
Create the Nova user
- Create the nova user and grant it access to the vms pool:
$ ceph auth get-or-create client.nova
[client.nova]
key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova
- View and save the nova user's keyring file:
$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
caps mon = "allow r"
caps osd = "allow rwx pool=vms"
$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
Configuring Kolla-Ansible
- Log in to the osdev01 deployment node as root and set the environment variables:
$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig
Global configuration
- Edit globals.yml and disable Kolla's own Ceph deployment (the external cluster is used instead):
enable_ceph: "no"
- Enable the Cinder service, and enable the Ceph backends for Glance, Cinder, and Nova:
enable_cinder: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
Configure Glance
- Configure Glance to use the Ceph images pool via the glance user:
$ mkdir -pv config/glance
mkdir: created directory "config/glance"
$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
- Add the Ceph client configuration for Glance and the glance user's keyring file:
$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
"/opt/ceph/deploy/ceph.client.glance.keyring" -> "config/glance/ceph.client.glance.keyring"
Configure Cinder
- Configure the Cinder volume service to use the volumes pool via the cinder-volume user, and the Cinder backup service to use the backups pool via the cinder-backup user:
$ mkdir -pv config/cinder/
mkdir: created directory "config/cinder/"
$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
- Add the Ceph client configuration and keyring files for the Cinder volume and backup services:
$ cp config/glance/ceph.conf config/cinder/ceph.conf
$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory "config/cinder/cinder-backup/"
mkdir: created directory "config/cinder/cinder-volume/"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-volume.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
"/opt/ceph/deploy/ceph.client.cinder-backup.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-backup.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-volume/ceph.client.cinder.keyring"
Configure Nova
- Configure Nova to use the Ceph vms pool via the nova user:
$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
- Add the Ceph client configuration for Nova and the nova user's keyring file:
$ cp -v config/glance/ceph.conf config/nova/ceph.conf
"config/glance/ceph.conf" -> "config/nova/ceph.conf"
$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
"/opt/ceph/deploy/ceph.client.nova.keyring" -> "config/nova/ceph.client.nova.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/nova/ceph.client.cinder.keyring"
Deployment and Testing
Start the deployment
- Edit the deployment script osdev.sh:
#!/bin/bash
set -uexv
usage()
{
echo -e "usage : \n$0 <action>"
echo -e " \$1 action"
}
if [ $# -lt 1 ]; then
usage
exit 1
fi
${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible --configdir ${KOLLA_ROOT}/myconfig --passwords ${KOLLA_ROOT}/myconfig/passwords.yml --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf $1
- Make it executable:
$ chmod a+x osdev.sh
- Deploy the OpenStack cluster:
$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# to tear down the deployment: ./osdev.sh "destroy --yes-i-really-really-mean-it"
- Check the deployed services:
$ openstack service list
+----------------------------------+-------------+----------------+
| ID | Name | Type |
+----------------------------------+-------------+----------------+
| 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron | network |
| 46de4440a5cf4a5697fa94b2d0424ba9 | heat | orchestration |
| 60b46b491ce7403aaec0c064384dde49 | heat-cfn | cloudformation |
| 7726ab5d41c5450d954f073f1a9aff28 | cinderv2 | volumev2 |
| 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi | metric |
| 7ae6f98018fb4d509e862e45ebf10145 | glance | image |
| a0ec333149284c09ac0e157753205fd6 | nova | compute |
| b15e90c382864723945b15c37d3317a6 | placement | placement |
| b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3 | volumev3 |
| c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy |
| db27eb8524be4db3be12b9dd0dab16b8 | keystone | identity |
| edf5c8b894a74a69b65bb49d8e014fff | cinder | volume |
+----------------------------------+-------------+----------------+
$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | osdev02 | nova | enabled | up | 2018-08-27T11:33:27.000000 |
| cinder-volume | rbd:volumes@rbd-1 | nova | enabled | up | 2018-08-27T11:33:18.000000 |
| cinder-backup | osdev02 | nova | enabled | up | 2018-08-27T11:33:17.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+
Initialize the environment
- Check the initial state of the RBD pools; they are all empty:
$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls
- Set the environment variables and initialize the OpenStack environment:
$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce
- Check the newly added image:
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2018-08-27T11:25:29Z |
| disk_format | qcow2 |
| file | /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file |
| id | 293b25bb-30be-4839-b4e2-1dba3c43a56a |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 68ada1726a864e2081a56be0a2dca3a0 |
| properties       | locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2018-08-27T11:25:30Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
- Check how the RBD pools changed: the image is stored in the images pool and has one snapshot:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls
$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
size 12 MiB in 2 objects
order 23 (8 MiB objects)
id: 178f4008d95
block_name_prefix: rbd_data.178f4008d95
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon Aug 27 19:25:29 2018
$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME SIZE TIMESTAMP
6 snap 12 MiB Mon Aug 27 19:25:30 2018
Create a virtual machine
- Create a VM:
$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1
+-------------------------------------+-----------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 65cVBJ7S6yaD |
| config_drive | |
| created | 2018-08-27T11:29:03Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 309f1364-4d58-413d-a865-dfc37ff04308 |
| image | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name | mykey |
| name | demo1 |
| progress | 0 |
| project_id | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2018-08-27T11:29:03Z |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------+
$ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | osdev03 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03 |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2018-08-27T11:29:16.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | demo-net=10.0.0.11 |
| config_drive | |
| created | 2018-08-27T11:29:03Z |
| flavor | m1.tiny (1) |
| hostId | 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 |
| id | 309f1364-4d58-413d-a865-dfc37ff04308 |
| image | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name | mykey |
| name | demo1 |
| progress | 0 |
| project_id | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2018-08-27T11:29:16Z |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
- The VM created a volume in the vms pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
- Log in to the node hosting the VM. The VM's system disk is exactly the volume created in the vms pool, and the qemu process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:
$ ssh osdev@osdev03
$ sudo docker exec -it nova_libvirt virsh list
Id Name State
----------------------------------------------------
1 instance-00000001 running
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='nova'>
<secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
</auth>
<source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
<host name='172.29.101.166' port='6789'/>
<host name='172.29.101.167' port='6789'/>
<host name='172.29.101.168' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
...
$ ps -aux | grep qemu
42436 2678909 4.6 0.0 1341144 171404 ? Sl 19:29 0:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
$ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd
librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000)
libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)
Create a volume
- Create a volume:
$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-08-27T11:33:52.000000 |
| description | None |
| encrypted | False |
| id | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
+---------------------+--------------------------------------+
- Check the pools again: the new volume is placed in the volumes pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
Create a backup
- Create a volume backup; it ends up in the backups pool:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | f2321578-88d5-4337-b93c-798855b817ce |
| name | None |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID | Name | Description | Status | Size |
+--------------------------------------+------+-------------+-----------+------+
| f2321578-88d5-4337-b93c-798855b817ce | None | None | available | 1 |
+--------------------------------------+------+-------------+-----------+------+
$ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | nova |
| container | backups |
| created_at | 2018-08-27T11:39:40.000000 |
| data_timestamp | 2018-08-27T11:39:40.000000 |
| description | None |
| fail_reason | None |
| has_dependent_backups | False |
| id | f2321578-88d5-4337-b93c-798855b817ce |
| is_incremental | False |
| name | None |
| object_count | 0 |
| size | 1 |
| snapshot_id | None |
| status | available |
| updated_at | 2018-08-27T11:39:46.000000 |
| volume_id | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
+-----------------------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
- Create a second backup: the backups pool itself does not change; only a new snapshot is added to the existing backup base image:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 07132063-9bdb-4391-addd-a791dae2cfea |
| name | None |
+-------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME SIZE TIMESTAMP
4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08 1 GiB Mon Aug 27 19:39:46 2018
5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76 1 GiB Mon Aug 27 19:42:08 2018
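If you want the relationship between backups to be explicit on the OpenStack side as well, the client can be asked for an incremental backup; a hedged example (with the Ceph backup driver the data is still stored as a snapshot on the base image):
$ openstack volume backup create --incremental 3ccca300-bee3-4b5a-b89b-32e6b8b806d9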
Attach the volume
- Attach the new volume to the VM created earlier:
$ openstack server add volume demo1 volume1
$ openstack volume show volume1
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments | [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-08-27T11:33:52.000000 |
| description | None |
| encrypted | False |
| id | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| os-vol-host-attr:host | rbd:volumes@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | attached_mode='rw' |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| type | None |
| updated_at | 2018-08-27T11:44:52.000000 |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- On the node hosting the VM, check the libvirt domain definition again: a second RBD disk has been added:
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='nova'>
<secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
</auth>
<source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
<host name='172.29.101.166' port='6789'/>
<host name='172.29.101.167' port='6789'/>
<host name='172.29.101.168' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none' discard='unmap'/>
<auth username='cinder-volume'>
<secret type='ceph' uuid='3fa55f7c-b556-4095-9253-b908d5408ec8'/>
</auth>
<source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'>
<host name='172.29.101.166' port='6789'/>
<host name='172.29.101.167' port='6789'/>
<host name='172.29.101.168' port='6789'/>
</source>
<target dev='vdb' bus='virtio'/>
<serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
...
- Create a floating IP for the VM and log in over SSH:
$ openstack console url show demo1
+-------+-------------------------------------------------------------------------------------+
| Field | Value |
+-------+-------------------------------------------------------------------------------------+
| type | novnc |
| url | http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 |
+-------+-------------------------------------------------------------------------------------+
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2018-08-27T11:49:02Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.162.52 |
| floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 |
| id | 2aa86075-9c62-49f5-84ac-e7b6353c9591 |
| name | 192.168.162.52 |
| port_id | None |
| project_id | 68ada1726a864e2081a56be0a2dca3a0 |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2018-08-27T11:49:02Z |
+---------------------+--------------------------------------+
$ openstack server add floating ip demo1 192.168.162.52
$ openstack server list
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
$ ssh root@osdev02
$ ip netns
qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1)
qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0)
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.50
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.9
(username "cirros", password "gocubsgo")
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh cirros@192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh cirros@10.0.0.11
$ sudo passwd root
Changing password for root
New password:
Bad password: too weak
Retype password:
Password for root changed by root
$ su -
Password:
- Format the volume, mount it, write a test file, and finally unmount it:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 1G 0 disk
|-vda1 253:1 0 1015M 0 part /
`-vda15 253:15 0 8M 0 part
vdb 253:16 0 1G 0 disk
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/vdb /mnt
# df -h
Filesystem Size Used Available Use% Mounted on
/dev 240.1M 0 240.1M 0% /dev
/dev/vda1 978.9M 23.9M 914.1M 3% /
tmpfs 244.2M 0 244.2M 0% /dev/shm
tmpfs 244.2M 92.0K 244.1M 0% /run
/dev/vdb 975.9M 1.3M 907.4M 0% /mnt
# echo "hello openstack, volume test." > /mnt/ceph_rbd_test
# umount /mnt
# df -h
Filesystem Size Used Available Use% Mounted on
/dev 240.1M 0 240.1M 0% /dev
/dev/vda1 978.9M 23.9M 914.1M 3% /
tmpfs 244.2M 0 244.2M 0% /dev/shm
tmpfs 244.2M 92.0K 244.1M 0% /run
Detach the volume
- Detach the volume and observe the change inside the VM:
$ openstack server remove volume demo1 volume1
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 1G 0 disk
|-vda1 253:1 0 1015M 0 part /
`-vda15 253:15 0 8M 0 part
- Map and mount the RBD volume on the host and check the file that was created inside the VM earlier; it is exactly the same:
$ rbd showmapped
id pool image snap device
0 rbd rbd_test - /dev/rbd0
$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1
$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/
$ ls /mnt/volume1/
ceph_rbd_test lost+found/
$ cat /mnt/volume1/ceph_rbd_test
hello openstack, volume test.
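After verifying the contents you will probably want to undo the host-side mapping; a minimal cleanup sketch (note that deep-flatten cannot be re-enabled once disabled, so only the other two features are restored):
$ umount /mnt/volume1
$ rbd unmap /dev/rbd1
$ rbd feature enable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff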
References
- External Ceph
Ceph File System: the Slickest Ceph Dashboard and Ceph Monitoring (5)
Implementing the Ceph Dashboard
About the Ceph Dashboard
There are plenty of options for visualizing Ceph monitoring, such as Grafana and Kraken. Starting with Luminous, however, Ceph ships a native Dashboard module that exposes the cluster's basic status. The steps below cover the Mimic/Nautilus dashboard installation; on Nautilus you additionally need to install the ceph-mgr-dashboard package.
Configuring the Ceph Dashboard
1. Install the package on every mgr node:
# yum install ceph-mgr-dashboard
2. Enable the dashboard mgr module:
# ceph mgr module enable dashboard
3. Generate and install a self-signed certificate:
# ceph dashboard create-self-signed-cert
4. Create a dashboard login user and password:
# ceph dashboard ac-user-create guest 1q2w3e4r administrator
5. Check how the service is exposed:
# ceph mgr services
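This prints the URLs of the enabled mgr modules; the address and port depend on your cluster, so the output below is only an illustrative example:
{
    "dashboard": "https://192.168.25.224:8443/"
}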
Once this is configured you can log in; the pages look like this:
![Login page](https://raw.githubusercontent.com/PassZhang/passzhang.github.io/images-picgo/20200101134137.png)
![Main page](https://raw.githubusercontent.com/PassZhang/passzhang.github.io/images-picgo/20200101134252.png)
![Dashboard page](https://raw.githubusercontent.com/PassZhang/passzhang.github.io/images-picgo/20200101134441.png)
Commands to change the default configuration
Set the dashboard port:
# ceph config-key set mgr/dashboard/server_port 7000
Set the dashboard listen address:
# ceph config-key set mgr/dashboard/server_addr $IP
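These settings are usually only picked up after the dashboard module is restarted; a hedged way to force that:
# ceph mgr module disable dashboard
# ceph mgr module enable dashboard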
Enabling Object Gateway management
1. Obtain the credentials of an RGW user (shown here for an existing user01):
# radosgw-admin user info --uid=user01
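If the user does not exist yet, create it first; a hedged sketch (the uid and display name are just examples, and the dashboard needs a user with the system flag):
# radosgw-admin user create --uid=user01 --display-name=user01 --system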
2. Provide the user's access and secret keys to the dashboard:
# ceph dashboard set-rgw-api-access-key $access_key
# ceph dashboard set-rgw-api-secret-key $secret_key
3. Configure the RGW host (and port, if non-default):
# ceph dashboard set-rgw-api-host 192.168.25.224
4. Refresh the web page.
Installing Grafana
1. Configure the yum repository:
# vim /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
2. Install Grafana with yum:
# yum -y install grafana
3. Start Grafana and enable it at boot:
# systemctl start grafana-server.service
# systemctl enable grafana-server.service
Installing Prometheus
1. Download the release package from https://prometheus.io/download/
2. Extract the archive:
# tar fvxz prometheus-2.14.0.linux-amd64.tar.gz
3. Rename the extracted directory:
# mv prometheus-2.14.0.linux-amd64 /opt/prometheus
4. Check the Prometheus version:
# ./prometheus --version
5. Create a systemd unit for the service:
# vim /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus Monitoring System
Documentation=Prometheus Monitoring System
[Service]
ExecStart=/opt/prometheus/prometheus \
--config.file /opt/prometheus/prometheus.yml \
--web.listen-address=:9090
[Install]
WantedBy=multi-user.target
6. Reload systemd:
# systemctl daemon-reload
7. Start the service and enable it at boot:
# systemctl start prometheus
# systemctl enable prometheus
Configuring the Ceph mgr prometheus module
# ceph mgr module enable prometheus
# netstat -nltp | grep mgr      # check the listening port
# curl 127.0.0.1:9283/metrics   # test the metrics endpoint
Configuring Prometheus
1. Add the following under the scrape_configs: section:
vim prometheus.yml
- job_name: 'ceph_cluster'
honor_labels: true
scrape_interval: 5s
static_configs:
- targets: ['192.168.25.224:9283']
labels:
instance: ceph
2. Restart Prometheus:
# systemctl restart prometheus
3. Check in Prometheus that the target was added:
Open http://x.x.x.x:9090 in a browser -> Status -> Targets
Configuring Grafana
1. Log in to the Grafana web UI in a browser.
2. Add a data source: Configuration -> Data Sources.
3. Add a dashboard: Home -> Find dashboards on grafana.com.
4. Search for Ceph dashboards.
5. Home -> Import dashboard, pick a suitable dashboard, and note its ID.
Original source: https://www.cnblogs.com/passzhang/p/12179816.html
Installing and Using Ceph 0.80 (CentOS 7 / ceph-deploy)
Ceph's main goal is a POSIX-based distributed file system with no single point of failure, where data is replicated fault-tolerantly and seamlessly. See: http://www.oschina.net/p/ceph
Most Ceph deployments today run on Ubuntu, because its kernel enables ceph_fs by default. If you choose CentOS 7 (whose default file system is XFS rather than EXT4) you need to watch out for more details, for example when mounting the Ceph file system from a client.
Many of the articles online either do not match 0.80 or contain steps that can be skipped, such as editing ceph.conf, so I went through the installation several times and wrote this summary. As an aside about Red Hat: it acquired Inktank, the company behind Ceph, and released its own edition ($1000/cluster), yet still does not enable ceph_fs in its latest kernel, which pushes a lot of people straight to Ubuntu.
1. Host environment
| Hostname | IP | Role | OS |
| --- | --- | --- | --- |
| ceph0 | 10.9.16.96 | MON, MDS | CentOS 7 |
| ceph1 | 10.9.16.97 | MON, OSD | CentOS 7 |
| ceph2 | 10.9.16.98 | OSD, MDS | CentOS 7 |
| ceph3 | 10.9.16.99 | OSD, MDS | CentOS 7 |
| ceph4 | 10.9.16.100 | MON | CentOS 7 |
| client0 | 10.9.16.89 | client | CentOS 7 (kernel 3.16.2) |
| client1 | 10.9.16.95 | client | Ubuntu 14.04 |
Deployment recommendations:
Use three MON nodes; keep the OSD data disks separate from the OS disk for better performance; and have at least two gigabit NICs (only the cluster-internal IPs are listed here, client-facing IPs are omitted).
2. Preparation (note: ceph-deploy can install ceph directly, or you can install it separately with yum)
Make sure every machine's hostname is correct (on CentOS 7 you only need to edit /etc/hostname, which is easier than on older releases).
Add every IP/hostname pair to /etc/hosts on every machine.
Use ssh-copy-id so the servers can ssh to each other without passwords (this is where ansible becomes handy).
Stop the firewall (systemctl stop firewalld.service) or open ports 6789 and 6800-6900.
Edit /etc/ntp.conf and enable NTP time synchronization (crontab/ntpdate is unreliable; not covered further).
Make sure the epel/remi repositories are configured; on client0, also configure the elrepo repository so the kernel can be upgraded with yum.
On every OSD server, create the data directory, e.g. /var/local/osd1 on ceph1 and /var/local/osd2 on ceph2.
3. Installation
(Unless stated otherwise, all commands below are run on ceph0.)
Generate the MON configuration: ceph-deploy new ceph{0,1,4}
Install ceph: ceph-deploy install ceph0 ceph1 ceph2 ceph3 ceph4 (note: this step can be skipped if ceph has already been installed on every machine with yum)
Generate the keys: ceph-deploy --overwrite-conf mon create-initial
Prepare the OSD servers: ceph-deploy --overwrite-conf osd prepare ceph1:/var/local/osd1 ceph2:/var/local/osd2 ceph3:/var/local/osd3
Activate the OSDs: ceph-deploy osd activate ceph1:/var/local/osd1 ceph2:/var/local/osd2 ceph3:/var/local/osd3
Copy the keys to every node: ceph-deploy admin ceph0 ceph1 ceph2 ceph3 ceph4
Check that the cluster is healthy: ceph health
Install the MDS nodes: ceph-deploy mds create ceph0 ceph2 ceph3
Check the status:
[root@ceph0 ~]# ceph -s
    cluster 9ddc0226-574d-4e8e-8ff4-bbe9cd838e21
     health HEALTH_OK
     monmap e1: 2 mons at {ceph0=10.9.16.96:6789/0,ceph1=10.9.16.97:6789/0,ceph4=10.9.16.100:6789/0}, election epoch 4, quorum 0,1 ceph0,ceph1
     mdsmap e5: 1/1/1 up {0=ceph0=up:active}, 1 up:standby
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v6312: 192 pgs, 3 pools, 1075 MB data, 512 objects
            21671 MB used, 32082 MB / 53754 MB avail
                 192 active+clean
4. Mounting
The default CentOS 7 kernel on client0 does not have ceph_fs enabled, so the kernel has to be replaced; here it is simply updated with yum (you can also compile one by hand):
yum --enablerepo=elrepo-kernel install kernel-ml
grub2-set-default 0
mkdir /mnt/cephfs
mount -t ceph 10.9.16.96:6789,10.9.16.97:6789:/ /mnt/cephfs -o name=admin,secret=AQDnDBhUWGS6GhAARV0CjHB*******Y1LQzQ==
# The secret here is the key from ceph.client.admin.keyring.
# The corresponding /etc/fstab entry for mounting at boot:
10.9.16.96:6789,10.9.16.97:6789:/ /mnt/ceph ceph name=admin,secret=AQDnDBhUWGS6GhAARV0CjHB*******Y1LQzQ==,noatime 0 0
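Leaving the key in plain text in /etc/fstab is not ideal; the kernel client also accepts a secretfile option. A hedged variant, assuming the admin key has been saved to /etc/ceph/admin.secret:
10.9.16.96:6789,10.9.16.97:6789:/ /mnt/ceph ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 0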
The mount command is the same on Ubuntu 14.04.
While copying files, ceph -s shows the live read/write throughput, e.g.: client io 12515 kB/s wr, 3 op/s.
Note that this is Ceph's internal throughput (including replication between servers), not simply the client-to-server throughput.
Check that everything now works as expected.
5. Closing notes
Contrary to what most online tutorials say, you do not have to edit the ceph.conf file; change it only when a specific requirement calls for it.
To separate the cluster-internal network from the public network, which improves network efficiency and reduces exposure to DDoS, add the following options under the [global] section of ceph.conf:
[global]
public network = {public-network-ip-address/netmask}
cluster network = {cluster-network-ip-address/netmask}
The default osd journal size is 0, so you have to set it in ceph.conf. The journal should be at least twice the product of the expected throughput and filestore min sync interval: osd journal size = {2 * (expected throughput * filestore min sync interval)}, for example osd journal size = 10000 (i.e. 10 GB).
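As a worked example of that formula (the numbers below are assumptions, not measurements from this cluster): with an expected throughput of 500 MB/s and a filestore min sync interval of 10 s, the journal should be at least 2 * 500 * 10 = 10000 MB, so in ceph.conf:
[osd]
osd journal size = 10000    ; 2 * 500 MB/s * 10 s, in MB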
Metavariables expand to the actual cluster name and daemon name. For example, if the cluster name is ceph (the default), you can retrieve the configuration of osd.0 with: ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | less
To mount when the monitors listen on a non-default port: mount.ceph monhost1:7000,monhost2:7000,monhost3:7000:/ /mnt/foo
Other topics (adding/removing nodes, block devices, etc.) will be covered later.
CephFS + Samba: Building a Ceph-based File Sharing Service
Use Samba to export CephFS over the SMB protocol, so that Windows, Linux, macOS, and other clients can access the file share.
2 Environment
3 Install Samba
Install the Samba RPM packages:
yum -y install samba samba-client samba-common
4 Create the Samba user
groupadd samba
useradd samba -d /home/samba -g samba -s /sbin/nologin
smbpasswd -a samba
5 Build the vfs_ceph module from source
Download the Samba source and extract it: tar -zxvf samba-4.8.3.tar.gz
yum -y install lmdb python36 python36-devel lmdb-devel gnutls-devel gpgme-devel python-gpgme jansson-devel libarchive-devel libacl-devel pam-devel
./configure
make
cd bin/default/source3/modules/
cp -a libvfs_module_ceph.so /usr/lib64/samba/vfs/
6 Configure Ceph
Create the CephFS samba.gw account:
ceph auth get-or-create client.samba.gw mon 'allow r' \
osd 'allow *' mds 'allow *' -o ceph.client.samba.gw.keyring
Copy the keyring to /etc/ceph:
cp ceph.client.samba.gw.keyring /etc/ceph/
7 Configure Samba
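The original post does not include the share definition here, so the following is only a minimal sketch of an smb.conf share that exports CephFS through the vfs_ceph module; the share name, CephFS path, and user are assumptions:
[share]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba.gw
    read only = no
    valid users = samba
    kernel share modes = no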
8 Start the services
systemctl start smb.service
systemctl enable smb.service
systemctl start nmb.service
systemctl enable nmb.service
9 Mounting on Linux
Install cifs-utils on the Linux client: yum -y install cifs-utils
mount.cifs //IP/share /mnt/share -o username=xxx,password=xxx
10 Mounting on Windows
11 Troubleshooting
11.1 No permission to create or delete files and directories in the share from Windows
The fix is to mount CephFS on the gateway host with the kernel client and relax the permissions:
mount -t ceph IP:/ /mnt/cephfs/
chmod 777 -R /mnt/cephfs/
umount /mnt/cephfs