Using Ceph RBD Volumes in Kubernetes Pods
2017-01-18
Operations on the Ceph Cluster
Create a Ceph Pool
After the Ceph cluster is installed, the only pool by default is rbd:
ceph osd lspools
0 rbd,
A Ceph cluster can contain multiple pools. A pool is a logical storage partition, and different pools can have different data-handling policies, such as replica size, placement groups, CRUSH rules, snapshots, and so on.
Next, create a kube pool dedicated to the Kubernetes cluster. Since the test Ceph cluster has only two OSDs, pg_num is set to 128 (the value recommended for clusters with fewer than 5 OSDs):
ceph osd pool create kube 128

ceph osd lspools
0 rbd,2 kube,
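To confirm the new pool's replication and placement-group settings, the pool attributes can be queried. This is a quick sanity check, not part of the original walkthrough:

ceph osd pool get kube size
ceph osd pool get kube pg_num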
Create an RBD Image
Next, create an RBD image; this is the image the Kubernetes Pod will use.
rbd create kube/myimage --size 1024
The command above creates a 1 GB image. Use rbd list to print the images in the kube pool:
rbd list kube
myimage
Map the kube/myimage image into the kernel:
rbd map myimage --pool kube
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address
This means the current system kernel does not support some of the RBD image's features. Check the information of myimage:
rbd info myimage --pool kube
rbd image 'myimage':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.104274b0dc51
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
The features line lists several features. Next, manually disable every feature except layering:
rbd feature disable myimage exclusive-lock, object-map, fast-diff, deep-flatten --pool kube
Check again:
rbd info myimage --pool kube
rbd image 'myimage':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.104274b0dc51
    format: 2
    features: layering
    flags:
Try mapping the kube/myimage image into the kernel again:
rbd map myimage --pool kube
/dev/rbd0
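To avoid the feature mismatch in the first place, an image can also be created with only the layering feature enabled. A sketch, where myimage2 is a hypothetical image name used purely for illustration:

# create an image with only the layering feature enabled
rbd create kube/myimage2 --size 1024 --image-feature layering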
Next, format this image:
mkfs.ext4 /dev/rbd0
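Before handing the image over to Kubernetes, it can optionally be mounted on the Ceph client host to confirm the filesystem is usable. A quick check; /mnt/rbd-test is a hypothetical mount point:

mkdir -p /mnt/rbd-test
mount /dev/rbd0 /mnt/rbd-test
df -h /mnt/rbd-test
umount /mnt/rbd-test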
The steps to delete an image are as follows:
rbd unmap myimage --pool kube
rbd rm myimage --pool kube
Operations on Kubernetes
Install ceph-common
So that Kubernetes nodes can invoke rbd, install ceph-common on each Kubernetes node:
yum install -y ceph-common
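After installation, the rbd client should be available on the node. A simple check:

which rbd
rbd --version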
Create the Ceph Secret Object
Next, create ceph-secret in Kubernetes. The Kubernetes volume will use this Secret to access the Ceph cluster.
Check the keyring value of the Ceph cluster's client.admin user:
ceph auth get-key client.admin
AQA2WsVYyv7RBRAA0TBjCztSO5xg8Ungx5MKzQ==
Because Kubernetes Secret data must be Base64-encoded, convert this keyring to Base64:
ceph auth get-key client.admin | base64
QVFBMldzVll5djdSQlJBQTBUQmpDenRTTzV4ZzhVbmd4NU1LelE9PQ==
Create the ceph-secret.yml file:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBMldzVll5djdSQlJBQTBUQmpDenRTTzV4ZzhVbmd4NU1LelE9PQ==
Create ceph-secret in Kubernetes:
kubectl create -f ceph-secret.yml

kubectl get secret
NAME          TYPE      DATA      AGE
ceph-secret   Opaque    1         45s
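As an alternative to writing the YAML by hand, the same Secret could be created in one step with kubectl, which Base64-encodes the value itself. A sketch, assuming kubectl access and the Ceph admin keyring are available on the same machine:

kubectl create secret generic ceph-secret --from-literal=key="$(ceph auth get-key client.admin)"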
Create a Pod That Uses RBD as a Volume
Download rbd-with-secret.json from the RBD volume example in the Kubernetes GitHub repository:
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/rbd/rbd-with-secret.json
This file is an example of a Pod named rbd2, which defines a volume backed by Ceph RBD:
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "rbd2"
    },
    "spec": {
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                        "10.16.154.78:6789",
                        "10.16.154.82:6789",
                        "10.16.154.83:6789"
                    ],
                    "pool": "kube",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {
                        "name": "ceph-secret"
                    },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}
The volumes section defines a volume named rbdpd. The main fields under rbd are:
- monitors: the Monitors of the Ceph cluster. The test Ceph cluster has only one Monitor, so this is changed to "192.168.61.31:6789"
- pool: the Ceph pool used by this volume. Here the value is kube, the pool created earlier
- image: the RBD image to use. Since the myimage image was created in the kube pool earlier, this is changed to "myimage"
- user: the Ceph user, admin, corresponding to ceph.client.admin.keyring
- secretRef: references the Secret named ceph-secret that this volume uses
The modified rbd-with-secret.json looks like this:
......
    "volumes": [
        {
            "name": "rbdpd",
            "rbd": {
                "monitors": [
                    "192.168.61.31:6789"
                ],
                "pool": "kube",
                "image": "myimage",
                "user": "admin",
                "secretRef": {
                    "name": "ceph-secret"
                },
                "fsType": "ext4",
                "readOnly": true
            }
        }
    ]
......
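For reference, an equivalent YAML form of this volume definition would look like the following. This is a sketch for comparison only; it is not part of the original example:

volumes:
- name: rbdpd
  rbd:
    monitors:
    - 192.168.61.31:6789
    pool: kube
    image: myimage
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: true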
Create this Pod:
kubectl create -f rbd-with-secret.json

kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
rbd       1/1       Running   0          51s
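To verify that the volume is really backed by the RBD image, check which node the Pod was scheduled on and look for the mapped device there. A quick check, using the Pod name from the output above; kubelet maps the image on that node:

# find the node where the Pod is running
kubectl describe pod rbd | grep Node

# on that node, the image mapped by kubelet should be listed
rbd showmapped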