Using a Ceph RBD Volume in a Kubernetes Pod

2017-01-18 | Tags: Kubernetes, Ceph

Operations on the Ceph Cluster

Creating a Ceph Pool

After the Ceph cluster is installed, the only pool by default is rbd:

ceph osd lspools
0 rbd,

A Ceph cluster can have multiple pools; a pool is a logical storage partition. Different pools can be configured with different data-handling policies, such as replica size, placement groups, CRUSH rules, and snapshots.
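
For example, the replica size of the default rbd pool can be inspected and changed like this (shown purely as an illustration; pick values appropriate for your own cluster):

ceph osd pool get rbd size
ceph osd pool set rbd size 2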

Next, create a kube pool dedicated to the Kubernetes cluster. The test Ceph cluster has only two OSDs, so pg_num is set to 128, the value commonly recommended for clusters with fewer than 5 OSDs:

ceph osd pool create kube 128

ceph osd lspools
0 rbd,2 kube,
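
The placement group count of the new pool can be verified with:

ceph osd pool get kube pg_num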

Creating an RBD Image

Next, create an RBD image; this is the image the Kubernetes Pod will use.

rbd create kube/myimage --size 1024

The command above creates a 1GB image. Use rbd list to list the images in the kube pool:

rbd list kube
myimage

Map the kube/myimage image into the kernel:

rbd map myimage --pool kube
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

This means the current kernel does not support some of the RBD image's features. Take a look at myimage's metadata:

rbd info myimage --pool kube
rbd image 'myimage':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.104274b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:

The features line shows quite a few features enabled. Manually disable everything except layering:

rbd feature disable myimage exclusive-lock, object-map, fast-diff, deep-flatten --pool kube

Check again:

rbd info myimage --pool kube
rbd image 'myimage':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.104274b0dc51
        format: 2
        features: layering
        flags:
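
To avoid disabling features by hand on every new image, the default feature set can also be lowered in ceph.conf on the client side. A minimal sketch (the value 1 enables only layering; feature values are additive bit flags):

[client]
rbd_default_features = 1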

Retry mapping kube/myimage into the kernel:

rbd map myimage --pool kube
/dev/rbd0

Next, format the image:

mkfs.ext4 /dev/rbd0
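
Before handing the image over to Kubernetes, the fresh filesystem can be mounted by hand as a sanity check; a quick sketch (the mount point /mnt/rbd-test is arbitrary):

mkdir -p /mnt/rbd-test
mount /dev/rbd0 /mnt/rbd-test
df -h /mnt/rbd-test
umount /mnt/rbd-test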

To delete an image, unmap it first and then remove it:

  rbd unmap myimage --pool kube
  rbd rm myimage --pool kube
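
If rbd rm reports that the image is still in use, the remaining watchers can be listed first (assuming a Ceph release recent enough to provide rbd status):

rbd status myimage --pool kube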

Operations on Kubernetes

Installing ceph-common

For Kubernetes Nodes to be able to invoke rbd, install ceph-common on every Node:

yum install -y ceph-common
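
A quick way to confirm that the client tools are in place on each Node:

which rbd
ceph --version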

Creating the Ceph Secret Object

Next, create ceph-secret in Kubernetes; this Secret is what the Kubernetes Volume will use to authenticate to the Ceph cluster.

Get the keyring of the Ceph cluster's client.admin user:

ceph auth get-key client.admin
AQA2WsVYyv7RBRAA0TBjCztSO5xg8Ungx5MKzQ==

Since Kubernetes Secret data must be Base64-encoded, encode the keyring:

ceph auth get-key client.admin | base64
QVFBMldzVll5djdSQlJBQTBUQmpDenRTTzV4ZzhVbmd4NU1LelE9PQ==
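
Alternatively, kubectl can handle the Base64 encoding itself; an equivalent one-liner, assuming ceph and kubectl are both usable from the same shell:

kubectl create secret generic ceph-secret \
  --from-literal=key="$(ceph auth get-key client.admin)"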

Create a ceph-secret.yml file:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBMldzVll5djdSQlJBQTBUQmpDenRTTzV4ZzhVbmd4NU1LelE9PQ==

Create ceph-secret in Kubernetes:

kubectl create -f ceph-secret.yml

kubectl get secret
NAME                  TYPE                                  DATA      AGE
ceph-secret           Opaque                                1         45s
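
To double-check that the stored key decodes back to the original keyring:

kubectl get secret ceph-secret -o jsonpath='{.data.key}' | base64 -d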

Creating a Pod That Uses RBD as a Volume

Download rbd-with-secret.json from the rbd example in the Kubernetes GitHub repository:

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/rbd/rbd-with-secret.json

This file defines an example Pod named rbd2 with a Volume backed by Ceph RBD:

{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "rbd2"
    },
    "spec": {
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                                                        "10.16.154.78:6789",
                                                        "10.16.154.82:6789",
                                                        "10.16.154.83:6789"
                                 ],
                    "pool": "kube",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {
                                                  "name": "ceph-secret"
                                         },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}

The volumes section defines a volume named rbdpd; the fields under rbd are:

  • monitors: the addresses of the Ceph Monitors. The test Ceph cluster has only one Monitor, so this is changed to "192.168.61.31:6789"
  • pool: the Ceph pool backing this volume; set to kube, the pool created earlier
  • image: the RBD image to use; changed to "myimage", the image created in the kube pool earlier
  • user: the Ceph user, admin here, corresponding to ceph.client.admin.keyring
  • secretRef: references the Secret named ceph-secret that holds the key for this user

The modified rbd-with-secret.json looks like this:

......
 "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                                                        "192.168.61.31:6789"
                                 ],
                    "pool": "kube",
                    "image": "myimage",
                    "user": "admin",
                    "secretRef": {
                                                  "name": "ceph-secret"
                                         },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
......

Create the Pod (note that readOnly is true in this example, so the volume will be mounted read-only inside the container):

kubectl create -f rbd-with-secret.json

kubectl get pods
NAME                    READY     STATUS     RESTARTS   AGE
rbd2                    1/1       Running    0          51s
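
The kubernetes/pause image has no shell, so the mount cannot be inspected with kubectl exec. Instead, check the Pod's events and look at the mapped device on the Node where the Pod was scheduled; a hedged verification sketch:

kubectl describe pod rbd2
# on the Node running the Pod:
rbd showmapped
mount | grep rbd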
