[Note] Last updated on October 21, 2018. The content may be outdated; use with caution.
Earlier we implemented high availability for the Kubernetes cluster's edge nodes with Keepalived; see "Kubernetes Ingress in Practice (4): High Availability of Kubernetes Ingress Edge Nodes on Bare Metal". Once kube-proxy runs in ipvs mode, keepalived is no longer needed: the ingress controller's Service is given an externalIP, that externalIP is the VIP, and kube-proxy's IPVS takes over handling it. The test environment is as follows:
kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready edge,master 5h58m v1.12.0 192.168.61.11 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://18.6.1
node2 Ready edge 5h55m v1.12.0 192.168.61.12 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://18.6.1
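Before relying on this behavior, it is worth confirming that kube-proxy is really running in ipvs mode. A minimal check, assuming a kubeadm-provisioned cluster where kube-proxy reads its configuration from the kube-proxy ConfigMap, and that kube-proxy's metrics port is the default 10249:

# Mode configured in the kube-proxy ConfigMap (kubeadm clusters):
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
# Mode actually in effect, reported by kube-proxy on its metrics port (run this on a node):
curl -s localhost:10249/proxyMode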
node1 is the master node, and we want both node1 and node2 to act as the cluster's edge nodes. We again use helm to deploy nginx ingress, with a slightly adjusted values file ingress-nginx.yaml for the stable/nginx-ingress chart:
controller:
  replicaCount: 2
  service:
    externalIPs:
    - 192.168.61.10
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
The nginx ingress controller's replicaCount is 2, so its replicas will be scheduled onto the two edge nodes, node1 and node2. The 192.168.61.10 specified in externalIPs is the VIP, which kube-proxy will bind to the kube-ipvs0 interface.
helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx \
-f ingress-nginx.yaml
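To confirm that the podAntiAffinity rule above actually spreads the two controller replicas across node1 and node2, list the pods together with the nodes they landed on (pod names will differ in your cluster):

# The two nginx-ingress-controller pods should be running on node1 and node2:
kubectl get pods -n ingress-nginx -o wide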
The nginx-ingress-controller Service:
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.99.214.125 192.168.61.10 80:30750/TCP,443:30961/TCP 4m48s
nginx-ingress-default-backend ClusterIP 10.105.78.103 <none> 80/TCP 4m48s
Check the kube-ipvs0 interface on node1:
ip addr sh kube-ipvs0
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether f6:3b:12:a5:79:82 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.108.71.144/32 brd 10.108.71.144 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.101.228.188/32 brd 10.101.228.188 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.99.214.125/32 brd 10.99.214.125 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.105.78.103/32 brd 10.105.78.103 scope global kube-ipvs0
valid_lft forever preferred_lft forever
Check the kube-ipvs0 interface on node2:
ip addr sh kube-ipvs0
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether fa:c5:24:df:22:eb brd ff:ff:ff:ff:ff:ff
inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.108.71.144/32 brd 10.108.71.144 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.101.228.188/32 brd 10.101.228.188 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.99.214.125/32 brd 10.99.214.125 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.105.78.103/32 brd 10.105.78.103 scope global kube-ipvs0
valid_lft forever preferred_lft forever
The VIP 192.168.61.10 is visible on kube-ipvs0 on both nodes.
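Besides the address on the kube-ipvs0 dummy interface, the IPVS virtual-server table on an edge node shows how traffic to the VIP is balanced across the controller pods. A quick look, assuming the ipvsadm tool is installed (on CentOS 7: yum install -y ipvsadm); the real-server IPs it lists will be the controller pod IPs of your own cluster:

# List the IPVS virtual server for the VIP's HTTP port, together with its real servers:
ipvsadm -Ln -t 192.168.61.10:80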
About Service externalIPs
A Kubernetes Service's externalIPs must be routable to at least one Kubernetes node.
In other words, if an external IP can be routed to one or more cluster nodes, a Service can be exposed on that external IP, and traffic sent to externalIP:port is forwarded into the cluster. Note that externalIPs are not managed by Kubernetes; they are the cluster administrator's responsibility, i.e. the administrator must ensure the external IP can be routed to at least one node.
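For reference, exposing any Service on an external IP only takes the externalIPs field in its spec. A minimal sketch (the Service name, selector and ports here are hypothetical, for illustration only):

apiVersion: v1
kind: Service
metadata:
  name: my-service           # hypothetical name
spec:
  selector:
    app: my-app              # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 192.168.61.10            # must be routable to at least one node; the cluster admin is responsible for this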
Both "Kubernetes Ingress in Practice (4): High Availability of Kubernetes Ingress Edge Nodes on Bare Metal" and "Kubernetes Ingress in Practice (5): High Availability of Kubernetes Ingress Edge Nodes on Bare Metal (IPVS-based)" use externalIPs to expose the nginx-ingress-controller Service outside the cluster; in the former kube-proxy does not have IPVS enabled, while in the latter it does.
With IPVS enabled, kube-proxy binds the externalIPs on every Kubernetes node, which is what we saw above as inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0.
The cluster administrator therefore has to ensure that 192.168.61.10 can be routed to the cluster's edge nodes.
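A simple end-to-end check, assuming no Ingress rules have been created yet, is to send a request to the VIP from a machine that can route to it; the nginx ingress controller's default backend should answer:

# Expect "default backend - 404" from nginx, which confirms that traffic
# arriving on the VIP is picked up by kube-proxy/IPVS and forwarded to the
# ingress controller:
curl -v http://192.168.61.10/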