Ceph Kraken 11.2.0 Deployment Notes

2017-04-06

This post records the full process of deploying Ceph Kraken in a test environment. In our production environment we mainly use Ceph's RBD block storage as Kubernetes storage volumes, and Ceph's RGW object storage as the object store for various services; both usage scenarios are summarized at the end.

Environment Preparation #

192.168.61.41 node1 - admin-node, deploy-node, mon, osd.0
192.168.61.42 node2 - mon, osd.1
192.168.61.43 node3 - mon, osd.2

Configure the Ceph yum repository /etc/yum.repos.d/ceph.repo on node1, choosing the kraken URL according to GET PACKAGES:

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

Install ceph-deploy:

yum install ceph-deploy

Create a deployment user sdsceph on every node, set a password for it, and make sure the user has sudo privileges:

useradd -d /home/sdsceph -m sdsceph
passwd sdsceph

echo "sdsceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/sdsceph
chmod 0440 /etc/sudoers.d/sdsceph

Disable requiretty so that the sdsceph user on each node does not need a controlling terminal: run visudo, find Defaults requiretty and change it to Defaults:sdsceph !requiretty, as shown below.
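
After the edit, the relevant sudoers entry (changed only through visudo) reads:

Defaults:sdsceph !requiretty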

Configure passwordless SSH from the sdsceph user on node1 to every node; just press Enter to leave the passphrase empty:

su sdsceph
ssh-keygen

Copy the key to node1, node2, and node3:

ssh-copy-id sdsceph@node1
ssh-copy-id sdsceph@node2
ssh-copy-id sdsceph@node3

Edit ~/.ssh/config on node1 so that logins to node2 and node3 default to the sdsceph user when no user is specified:

Host node1
   Hostname node1
   User sdsceph
Host node2
   Hostname node2
   User sdsceph
Host node3
   Hostname node3
   User sdsceph

Cluster Initialization #

Create the Cluster #

Use ceph-deploy to create the cluster. Some configuration files are generated during this process, so first create a directory ceph-cluster on node1 and change into it:

su sdsceph
mkdir ~/ceph-cluster
cd ~/ceph-cluster

Running ceph-deploy new {initial-monitor-node(s)} creates a Ceph cluster named ceph, with a MON on each of node1 through node3:

ceph-deploy new node1 node2 node3

The following three files are generated in the ceph-cluster directory:

ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
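
The generated ceph.conf records at least the cluster fsid, the initial monitors, and their addresses. As a rough illustration only (the fsid below is a placeholder, not a value from this deployment):

[global]
fsid = <generated-uuid>                      # placeholder; ceph-deploy generates a real UUID
mon_initial_members = node1, node2, node3
mon_host = 192.168.61.41,192.168.61.42,192.168.61.43
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx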

Install the Ceph Packages #

From node1, run the following to install Ceph on all the nodes:

su sdsceph
cd ~/ceph-cluster
ceph-deploy install node1 node2 node3 --release=kraken

Configure and Start the MON Nodes #

Initialize the configuration, start the Ceph MON nodes, and gather all the keys:

ceph-deploy mon create-initial

After this succeeds, the following keys appear in the current directory:

ceph.bootstrap-mds.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring

Running ps -ef | grep ceph-mon on each node shows that the Ceph MON processes are running.

Next, make Ceph MON start at boot by running the following on each node:

sudo systemctl enable ceph-mon.target
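
Optionally (not from the original notes), the per-host monitor service can also be checked directly; the systemd unit is named after the monitor id, which ceph-deploy sets to the hostname:

sudo systemctl status ceph-mon@node1    # on node1; use ceph-mon@node2 / ceph-mon@node3 on the other nodes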

Add and Start the OSD Nodes #

In this test environment, node1 through node3 each have an unpartitioned, unformatted raw disk /dev/sdb; these disks will be used to create the OSDs.

sudo fdisk -l

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Zap (initialize) the disks:

ceph-deploy disk zap node1:sdb
ceph-deploy disk zap node2:sdb
ceph-deploy disk zap node3:sdb

Prepare the OSDs, with data and journal on the same disk:

ceph-deploy osd prepare node1:sdb
ceph-deploy osd prepare node2:sdb
ceph-deploy osd prepare node3:sdb

After prepare completes, /dev/sdb on each node has been split into a ceph data partition and a ceph journal partition:

sudo fdisk -l

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1     10487808    209715166     95G  unknown         ceph data
 2         2048     10487807      5G  unknown         ceph journal

sudo df -h | grep sdb
/dev/sdb1                 95G   33M   95G   1% /var/lib/ceph/tmp/mnt.nyHIPm

Activate the OSDs:

ceph-deploy osd activate node1:sdb1:sdb2
ceph-deploy osd activate node2:sdb1:sdb2
ceph-deploy osd activate node3:sdb1:sdb2

Run the following commands on each node to make the Ceph OSDs start at boot:

sudo systemctl enable ceph-osd.target
sudo systemctl enable ceph.target

Checking Cluster Status #

Distribute the admin key: copy it to each node so that the ceph command line no longer needs the monitor address and ceph.client.admin.keyring specified every time:

ceph-deploy admin node1 node2 node3

Grant read permission on ceph.client.admin.keyring:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Check the status of the OSD nodes in the Ceph cluster:

ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.27809 root default
-2 0.09270     host node1
 0 0.09270         osd.0       up  1.00000          1.00000
-3 0.09270     host node2
 1 0.09270         osd.1       up  1.00000          1.00000
-4 0.09270     host node3
 2 0.09270         osd.2       up  1.00000          1.00000

Check the cluster health:

ceph health
HEALTH_OK
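
A few other read-only status commands that are handy at this point (output omitted; not shown in the original notes):

ceph -s         # overall cluster status, including monitor quorum and PG states
ceph df         # cluster-wide and per-pool usage
ceph mon stat   # monitor summary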

Using Ceph RBD Block Storage #

Recent Ceph releases enable a number of features on newly created RBD images, and all of these features require kernel support when an image is mapped with the kernel client. The CentOS 7 kernel supports only some of them, so mapping an image fails with the error RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable". In the earlier post Ceph块存储之RBD we manually disabled the features the running kernel does not support on each specific image and then mapped it. This time, to disable these features globally so that they are already disabled when an rbd image is created, we edit ceph.conf directly and add:

rbd_default_features = 1

  • The rbd_default_features = 1 setting above sets the default image features; the value 1 is the integer value of the bit code for the layering feature.

Next, push ceph.conf to /etc/ceph/ceph.conf on each node:

ceph-deploy --overwrite-conf config push node1 node2 node3
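
To confirm the new default takes effect, a quick check can be done by hand; the pool name rbd-test and image name img1 below are only examples, not names used in this deployment:

ceph osd pool create rbd-test 64
rbd create rbd-test/img1 --size 1024
rbd info rbd-test/img1        # the features line should now list only: layering
sudo rbd map rbd-test/img1    # should map without the feature set mismatch error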

Our current use of Ceph RBD is to create a dedicated storage pool, create RBD images under it, and use them as Persistent Volumes for the Kubernetes cluster. See the earlier articles for details.
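
As a rough sketch of that usage (the pool name kube, image name pv0001, and client name client.kube are assumptions for illustration, not taken from this post), a dedicated pool, an image, and a restricted cephx user could be created like this:

ceph osd pool create kube 64
rbd create kube/pv0001 --size 10240     # one image per Persistent Volume, size in MB
ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube' -o /etc/ceph/ceph.client.kube.keyring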

Using Ceph RGW Object Storage #

Next, deploy the Ceph RGW service in the same test environment. The civetweb frontend is used here; for the nginx approach see the earlier post Ceph对象存储之RGW.

192.168.61.41 node1 - admin-node, deploy-node, mon, osd.0, rgw
192.168.61.42 node2 - mon, osd.1, rgw
192.168.61.43 node3 - mon, osd.2, rgw

Install the Ceph RGW packages on each RGW node:

ceph-deploy install --rgw --release=kraken node1 node2 node3

The following error is reported:

file /etc/yum.repos.d/ceph.repo from install of ceph-release-1-1.el7.noarch conflicts with file from package ceph-release-1-1.el7.noarch

Remove the conflicting ceph-release package on the affected nodes:

yum remove ceph-release-1-1.el7.noarch

Install again:

ceph-deploy install --rgw --release=kraken node1 node2 node3

Create and start the RGW instances:

ceph-deploy rgw create node1 node2 node3

Run the following on each node to make Ceph RGW start at boot:

sudo systemctl enable ceph-radosgw.target
sudo systemctl enable ceph.target

The RGW service is now running on each node and listens on port 7480 by default:

sudo netstat -nltp | grep radosgw
tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      1066/radosgw

curl node1:7480
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
	<Owner>
		<ID>anonymous</ID>
		<DisplayName></DisplayName>
	</Owner>
	<Buckets></Buckets>
</ListAllMyBucketsResult>
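
If a different port is wanted (this step is not part of the original deployment), the civetweb frontend can be reconfigured in ceph.conf; the section name must match the instance name ceph-deploy created (rgw.<hostname>), and port 80 below is just an example:

[client.rgw.node1]
rgw_frontends = "civetweb port=80"

Then push the config and restart the instance on the node in question:

ceph-deploy --overwrite-conf config push node1 node2 node3
sudo systemctl restart ceph-radosgw@rgw.node1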

Create an S3 user:

radosgw-admin user create --uid=oper --display-name=oper --email=[email protected]

{
    "user_id": "oper",
    "display_name": "oper",
    "email": "[email protected]",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "oper",
            "access_key": "JGD1S199DEMTQVMP435P",
            "secret_key": "iaw2K9BHowvvyrFBGRUTrNJgw2E9eE7qZLcIO7vJ"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
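
If the keys are needed again later, they can be looked up at any time (not shown in the original notes):

radosgw-admin user info --uid=oper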

Install s3cmd on node1:

sudo yum install -y s3cmd

Next, configure s3cmd with the Access Key and Secret Key of the oper user created above:

s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: JGD1S199DEMTQVMP435P
Secret Key: iaw2K9BHowvvyrFBGRUTrNJgw2E9eE7qZLcIO7vJ
Default Region [US]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: JGD1S199DEMTQVMP435P
  Secret Key: iaw2K9BHowvvyrFBGRUTrNJgw2E9eE7qZLcIO7vJ
  Default Region: US
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/home/sdsceph/.s3cfg'

Access to data in S3 buckets normally goes through domain names, and some S3 clients need bucket resources to be associated with a concrete domain in order to use the S3 service properly, so a wildcard DNS environment may be required. For the s3cmd client on node1 we do not use wildcard DNS here. After s3cmd was configured above, a .s3cfg file was generated in the sdsceph user's home directory; find the following lines in it:

host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com

and change them to:

host_base = node1:7480
host_bucket = node1:7480/%(bucket)s

Below is a quick test of the S3 service:

Create a bucket:

s3cmd mb s3://mybucket
Bucket 's3://mybucket/' created

Upload an object:

s3cmd put hello.txt s3://mybucket
upload: 'hello.txt' -> 's3://mybucket/hello.txt'  [1 of 1]
 12 of 12   100% in    1s     6.96 B/s  done

Download an object:

cd /tmp
s3cmd get s3://mybucket/hello.txt

Upload an object and make it publicly accessible:

s3cmd put --acl-public hello.txt s3://mybucket/a/b/helloworld.txt
upload: 'hello.txt' -> 's3://mybucket/a/b/helloworld.txt'  [1 of 1]
 12 of 12   100% in    0s   148.96 B/s  done
Public URL of the object is: http://node1:7480/mybucket/a/b/helloworld.txt

curl http://node3:7480/mybucket/a/b/helloworld.txt
hello

List the objects:

s3cmd ls -r s3://mybucket
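
To clean up the test data afterwards (not covered in the original notes), delete the objects and then the bucket:

s3cmd del s3://mybucket/hello.txt
s3cmd del s3://mybucket/a/b/helloworld.txt
s3cmd rb s3://mybucket    # rb only succeeds once the bucket is empty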

