Setting up and testing a Redis 3.2 cluster
2017-03-06
Starting with Redis 3, Redis supports Cluster mode, which adds horizontal scalability. Redis Cluster shards data with hash slots: every key belongs to one of 16384 slots (0-16383), and each node is responsible for a subset of those slots. Redis Cluster has no central node and needs no proxy; clients connect directly to the cluster nodes, compute a key's slot with the same hash algorithm, and run the command on the node that owns that slot.
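A key's slot is CRC16(key) mod 16384, and any node can report the slot for a given key. A quick check once the cluster below is running (using the node addresses configured later in this post and a sample key name):

/usr/local/redis/bin/redis-cli -h 192.168.61.51 -p 6379 CLUSTER KEYSLOT user:1000
# prints an integer in 0-16383; CLUSTER NODES shows which master serves that range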
Installing Redis #
Preparing the environment #
CentOS 7.2
- redis1 192.168.61.51 6379(master) 6380(slave)
- redis2 192.168.61.52 6379(master) 6380(slave)
- redis3 192.168.61.53 6379(master) 6380(slave)
Set the kernel parameter vm.overcommit_memory = 1:

vi /etc/sysctl.conf
vm.overcommit_memory = 1

sysctl -p
Building and installing Redis #
wget http://download.redis.io/releases/redis-3.2.8.tar.gz
tar -zxvf redis-3.2.8.tar.gz
cd redis-3.2.8
make
make install PREFIX=/usr/local/redis

mkdir /usr/local/redis/etc
cp redis.conf /usr/local/redis/etc/
Start it for a quick test:
/usr/local/redis/bin/redis-server /usr/local/redis/etc/redis.conf
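If the build is good, the server should answer a PING (the stock redis.conf listens on 127.0.0.1:6379):

/usr/local/redis/bin/redis-cli ping
# expected reply: PONG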
Cluster initialization #
Configuration #
redis_6379.conf
daemonize yes
pidfile /var/run/redis_6379.pid
dir /var/lib/redis/redis_6379
logfile /var/log/redis/redis_6379.log
appendonly yes
bind 192.168.61.51
# bind 192.168.61.52
# bind 192.168.61.53
port 6379

cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-slave-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage yes
redis_6380.conf
daemonize yes
pidfile /var/run/redis_6380.pid
dir /var/lib/redis/redis_6380
logfile /var/log/redis/redis_6380.log
appendonly yes
bind 192.168.61.51
# bind 192.168.61.52
# bind 192.168.61.53
port 6380

cluster-enabled yes
cluster-config-file nodes-6380.conf
cluster-node-timeout 15000
cluster-slave-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage yes
Cluster-related options (on redis2 and redis3, also change the bind line to that host's own address, as indicated by the commented lines above):
- cluster-enabled: run the instance in cluster mode
- cluster-config-file: the file in which each node persists its view of the cluster; it is created and updated by Redis itself, so give each instance a distinct file name
- cluster-node-timeout: how long, in milliseconds, a node may be unreachable before it is considered failing
- cluster-slave-validity-factor: during a failover a slave asks to be promoted to master. A slave that has been disconnected from its master for too long holds stale data and should not be promoted; this factor controls how long is too long (roughly cluster-node-timeout × cluster-slave-validity-factor plus the replication ping period)
- cluster-migration-barrier: the minimum number of slaves a master must keep before one of its spare slaves may migrate to a master that has none
- cluster-require-full-coverage: when yes, the cluster stops accepting queries if any hash slot is left uncovered
Create the redis user and the log and data directories referenced in the configuration files:

groupadd redis
useradd -g redis -d /var/lib/redis -s /sbin/nologin redis
mkdir -p /var/log/redis
mkdir -p /var/lib/redis/redis_6379
mkdir -p /var/lib/redis/redis_6380
chown -R redis:redis /var/log/redis /var/lib/redis
redis_6379.service
[Unit]
Description=redis
After=network.target

[Service]
Type=forking
User=redis
ExecStart=/usr/local/redis/bin/redis-server /usr/local/redis/etc/redis_6379.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
redis_6380.service
[Unit]
Description=redis
After=network.target

[Service]
Type=forking
User=redis
ExecStart=/usr/local/redis/bin/redis-server /usr/local/redis/etc/redis_6380.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemctl enable redis_6379
systemctl enable redis_6380
systemctl start redis_6379
systemctl start redis_6380
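To confirm that both instances came up under systemd, check the unit state and the log files configured above (a quick sanity check, output omitted):

systemctl status redis_6379 redis_6380
tail /var/log/redis/redis_6379.log /var/log/redis/redis_6380.log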
Master nodes #
Once everything is configured, check the redis processes on each host; every instance should be running in cluster mode:
ps -ef | grep redis
redis     2656     1  0 15:09 ?        00:00:00 /usr/local/redis/bin/redis-server 192.168.61.51:6379 [cluster]
redis     2666     1  0 15:09 ?        00:00:00 /usr/local/redis/bin/redis-server 192.168.61.51:6380 [cluster]
Use CLUSTER MEET to add the master nodes:
./redis-cli -h 192.168.61.51 -p 6379 -c

cluster meet 192.168.61.52 6379
cluster meet 192.168.61.53 6379
Check the cluster state:
cluster info

cluster_state:fail                   # cluster state
cluster_slots_assigned:0             # number of slots assigned
cluster_slots_ok:0                   # number of slots in OK state
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3                # number of known nodes
cluster_size:0
cluster_current_epoch:2
cluster_my_epoch:0
cluster_stats_messages_sent:171
cluster_stats_messages_received:171
The cluster state above is fail because no slots have been assigned yet; the 16384 slots need to be distributed across the three master nodes. Write an addslots script:
addslots.sh
#!/bin/bash
# Assign slots $2 through $3 to the node at host $1 (port 6379), one ADDSLOTS call per slot
for ((i=$2;i<=$3;i++))
do
  /usr/local/redis/bin/redis-cli -h $1 -p 6379 CLUSTER ADDSLOTS $i
done
Run it for redis1, redis2, and redis3 respectively:
./addslots.sh redis1 0 5461
./addslots.sh redis2 5462 10922
./addslots.sh redis3 10923 16383
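As an alternative to the loop (a sketch, assuming your shell's argument limit allows it), CLUSTER ADDSLOTS accepts many slots in a single call, which is considerably faster:

/usr/local/redis/bin/redis-cli -h redis1 -p 6379 CLUSTER ADDSLOTS $(seq 0 5461)
/usr/local/redis/bin/redis-cli -h redis2 -p 6379 CLUSTER ADDSLOTS $(seq 5462 10922)
/usr/local/redis/bin/redis-cli -h redis3 -p 6379 CLUSTER ADDSLOTS $(seq 10923 16383)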
Check the cluster state again:
CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:0
cluster_stats_messages_sent:3495
cluster_stats_messages_received:3495
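CLUSTER SLOTS shows each slot range together with the node serving it, which is a convenient cross-check against the assignments made above:

/usr/local/redis/bin/redis-cli -h 192.168.61.51 -p 6379 CLUSTER SLOTS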
Slave nodes #
First add the slave (6380) nodes to the cluster:
cluster meet 192.168.61.51 6380
cluster meet 192.168.61.52 6380
cluster meet 192.168.61.53 6380
CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6                # 6 nodes in total
cluster_size:3                       # 3 master nodes
cluster_current_epoch:5
cluster_my_epoch:0
cluster_stats_messages_sent:4388
cluster_stats_messages_received:4388
Check each node's ID:
CLUSTER NODES
5daaec9170e2cb0e8e4dfa7e313aa4749c9a0b68 192.168.61.53:6380 master - 0 1488959931731 5 connected
8f60e94a47da98286795b2fbbcae1b3aea75e123 192.168.61.51:6380 master - 0 1488959933751 3 connected
bd503b887831e642ac850af3fdd6e27c27086db7 192.168.61.51:6379 myself,master - 0 0 0 connected 0-5461
956f9f95b96bb02842609cea5dcda0cf89bfb78e 192.168.61.53:6379 master - 0 1488959935772 2 connected 10923-16383
877a1fad6a6c344a69707a335d2dcf94afeecbbd 192.168.61.52:6380 master - 0 1488959934761 4 connected
7290136f19a26c877c31f565068ea02badf11339 192.168.61.52:6379 master - 0 1488959936786 1 connected 5462-10922
Next, assign the slaves; the intended master/slave mapping is:
master 192.168.61.51 6379 -> slave 192.168.61.52 6380
master 192.168.61.52 6379 -> slave 192.168.61.53 6380
master 192.168.61.53 6379 -> slave 192.168.61.51 6380
./redis-cli -h 192.168.61.51 -p 6380 -c
CLUSTER REPLICATE 5daaec9170e2cb0e8e4dfa7e313aa4749c9a0b68

./redis-cli -h 192.168.61.52 -p 6380 -c
CLUSTER REPLICATE bd503b887831e642ac850af3fdd6e27c27086db7

./redis-cli -h 192.168.61.53 -p 6380 -c
CLUSTER REPLICATE 7290136f19a26c877c31f565068ea02badf11339
CLUSTER NODES
877a1fad6a6c344a69707a335d2dcf94afeecbbd 192.168.61.52:6380 slave bd503b887831e642ac850af3fdd6e27c27086db7 0 1488960443865 4 connected
7290136f19a26c877c31f565068ea02badf11339 192.168.61.52:6379 master - 0 1488960447913 1 connected 5462-10922
956f9f95b96bb02842609cea5dcda0cf89bfb78e 192.168.61.53:6379 master - 0 1488960444893 2 connected 10923-16383
5daaec9170e2cb0e8e4dfa7e313aa4749c9a0b68 192.168.61.53:6380 myself,slave 7290136f19a26c877c31f565068ea02badf11339 0 0 5 connected
8f60e94a47da98286795b2fbbcae1b3aea75e123 192.168.61.51:6380 slave 5daaec9170e2cb0e8e4dfa7e313aa4749c9a0b68 0 1488960446907 5 connected
bd503b887831e642ac850af3fdd6e27c27086db7 192.168.61.51:6379 master - 0 1488960445904 0 connected 0-5461
At this point the highly available, sharded Redis 3 cluster is in place.
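As a quick smoke test (a minimal sketch, using the addresses above and a sample key), write and read a key through any node; with -c, redis-cli follows the MOVED redirect to the master that owns the key's slot:

/usr/local/redis/bin/redis-cli -h 192.168.61.51 -p 6379 -c set foo bar
/usr/local/redis/bin/redis-cli -h 192.168.61.51 -p 6379 -c get foo

To exercise the high-availability side, stop one master and watch its slave take over (run on 192.168.61.51; promotion should happen after roughly cluster-node-timeout):

systemctl stop redis_6379
/usr/local/redis/bin/redis-cli -h 192.168.61.52 -p 6379 CLUSTER NODES
# 192.168.61.52:6380 should now be listed as a master serving slots 0-5461
systemctl start redis_6379
# the old master rejoins as a slave of the newly promoted node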
Ruby-based cluster management tool #
Installing Ruby #
Install Ruby:
wget https://cache.ruby-lang.org/pub/ruby/2.4/ruby-2.4.0.tar.gz
tar -zxvf ruby-2.4.0.tar.gz
cd ruby-2.4.0
./configure prefix=/usr/local/ruby
make
make install

echo 'export PATH="$PATH:/usr/local/ruby/bin"' >> /etc/profile
source /etc/profile

ruby -v
ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-linux]
Install RubyGems:
wget https://rubygems.org/rubygems/rubygems-2.6.10.tgz
tar -zxvf rubygems-2.6.10.tgz
cd rubygems-2.6.10
ruby setup.rb

gem -v
2.6.10
Install zlib:
wget http://www.zlib.net/zlib-1.2.11.tar.gz
tar -zxvf zlib-1.2.11.tar.gz
cd zlib-1.2.11
./configure --prefix=/usr/local/zlib
make
make install
Install the Ruby zlib extension:
cd ruby-2.4.0/ext/zlib
ruby ./extconf.rb --with-zlib-dir=/usr/local/zlib
make
make install
The make step above fails with:

make: *** No rule to make target `/include/ruby.h', needed by `zlib.o'. Stop.

Fix it by editing the Makefile so the source and object paths are absolute:

ORIG_SRCS = /root/source/ruby-2.4.0/ext/zlib/zlib.c
SRCS = $(ORIG_SRCS)
OBJS = /root/source/ruby-2.4.0/ext/zlib/zlib.o
Install the redis gem:
gem sources --add http://rubygems.org
gem sources --remove https://rubygems.org/
gem sources --list

gem q -rn redis
gem install redis
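A quick check that the gem is visible to Ruby:

gem list redis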
redis-trib.rb, shipped under src/ in the Redis source tree, is the cluster management tool:
./redis-trib.rb
Usage: redis-trib <command> <options> <arguments ...>

  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  check           host:port
  info            host:port
  fix             host:port
                  --timeout <arg>
  reshard         host:port
                  --from <arg>
                  --to <arg>
                  --slots <arg>
                  --yes
                  --timeout <arg>
                  --pipeline <arg>
  rebalance       host:port
                  --weight <arg>
                  --auto-weights
                  --use-empty-masters
                  --timeout <arg>
                  --simulate
                  --pipeline <arg>
                  --threshold <arg>
  add-node        new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  del-node        host:port node_id
  set-timeout     host:port milliseconds
  call            host:port command arg arg .. arg
  import          host:port
                  --from <arg>
                  --copy
                  --replace
  help            (show this help)
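With the cluster already built by hand above, the check and info subcommands listed in the usage give a quick consistency report, for example:

./redis-trib.rb check 192.168.61.51:6379
./redis-trib.rb info 192.168.61.51:6379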