Kubernetes Ingress in Practice (5): High Availability for Kubernetes Ingress Edge Nodes on Bare Metal (Based on IPVS)

2018-10-21
Kubernetes

In the previous post we used Keepalived to implement high availability for the Kubernetes cluster's edge nodes; see "Kubernetes Ingress in Practice (4): High Availability for Kubernetes Ingress Edge Nodes on Bare Metal". Once kube-proxy runs in ipvs mode, keepalived is no longer needed: the ingress controller's Service simply lists the VIP in externalIPs, and kube-proxy's IPVS takes over managing it. The test environment is as follows:

kubectl get node -o wide
NAME    STATUS   ROLES         AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node1   Ready    edge,master   5h58m   v1.12.0   192.168.61.11   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.6.1
node2   Ready    edge          5h55m   v1.12.0   192.168.61.12   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.6.1
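
This walkthrough assumes kube-proxy is already running in ipvs mode. One quick way to confirm is to query the /proxyMode endpoint that kube-proxy serves on its metrics port (10249 by default) on any node:

curl localhost:10249/proxyMode
ipvs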

node1 is the master node, and we want both node1 and node2 to act as the cluster's edge nodes. As before, we use helm to deploy nginx ingress, with a few adjustments to the values file ingress-nginx.yaml for the stable/nginx-ingress chart:

controller:
  replicaCount: 2
  service:
    externalIPs:
    - 192.168.61.10
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

The nginx ingress controller's replicaCount is 2, and the podAntiAffinity rule ensures the two replicas are scheduled onto different edge nodes, i.e. node1 and node2. The 192.168.61.10 given in externalIPs is the VIP, which kube-proxy will bind to the kube-ipvs0 interface.
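
The nodeSelector above assumes node1 and node2 already carry the edge label (set up in Part 4); if it is missing, it can be applied like this:

kubectl label node node1 node-role.kubernetes.io/edge=
kubectl label node node2 node-role.kubernetes.io/edge=

With the labels in place, install the chart: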

helm install stable/nginx-ingress \
  -n nginx-ingress \
  --namespace ingress-nginx \
  -f ingress-nginx.yaml
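
Once the release is up, it is worth verifying that the anti-affinity rule actually spread the two controller replicas across the edge nodes (output abridged; the pod name hashes are placeholders):

kubectl get pod -n ingress-nginx -o wide
NAME                                READY   STATUS    ...   NODE
nginx-ingress-controller-<hash1>    1/1     Running   ...   node1
nginx-ingress-controller-<hash2>    1/1     Running   ...   node2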

Check the Service nginx-ingress-controller:

kubectl get svc -n ingress-nginx
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.99.214.125   192.168.61.10   80:30750/TCP,443:30961/TCP   4m48s
nginx-ingress-default-backend   ClusterIP      10.105.78.103   <none>          80/TCP                       4m48s
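
The VIP can already be exercised from any machine that can reach 192.168.61.10. Since no Ingress rules have been created yet, a request should simply land on the default backend (a sketch; the 404 body is the stock default-backend response):

curl http://192.168.61.10
default backend - 404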

Check the kube-ipvs0 interface on node1:

ip addr sh kube-ipvs0
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether f6:3b:12:a5:79:82 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.108.71.144/32 brd 10.108.71.144 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.228.188/32 brd 10.101.228.188 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.214.125/32 brd 10.99.214.125 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.105.78.103/32 brd 10.105.78.103 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

Check the kube-ipvs0 interface on node2:

ip addr sh kube-ipvs0
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether fa:c5:24:df:22:eb brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.108.71.144/32 brd 10.108.71.144 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.228.188/32 brd 10.101.228.188 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.214.125/32 brd 10.99.214.125 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.105.78.103/32 brd 10.105.78.103 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

The VIP 192.168.61.10 is visible on kube-ipvs0 on both nodes.
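
The IPVS virtual server table should likewise contain entries for the VIP on ports 80 and 443, pointing at the controller pods. A sketch of what to look for (the pod IPs below are illustrative):

ipvsadm -ln
...
TCP  192.168.61.10:80 rr
  -> 10.244.1.5:80                Masq    1      0          0
  -> 10.244.2.3:80                Masq    1      0          0
TCP  192.168.61.10:443 rr
...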

About the externalIPs of a Service

A Kubernetes Service's externalIPs are required to be routable to at least one Kubernetes node. In other words, if an external IP can be routed to one or more nodes, a Service can be exposed on that IP, and traffic sent to the external IP plus the Service port is carried into the cluster. It is important to note that externalIPs are not managed by Kubernetes but are the responsibility of the cluster administrator: the administrator must guarantee that each external IP can be routed to at least one node.
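
As a minimal illustration of the field itself (the Service name, selector, and port below are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  selector:
    app: my-app               # hypothetical selector
  ports:
  - port: 80
  externalIPs:
  - 192.168.61.10             # must be routable to at least one node; Kubernetes does not manage this IP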

Both Kubernetes Ingress in Practice (4): High Availability for Kubernetes Ingress Edge Nodes on Bare Metal and this post use externalIPs to expose the nginx-ingress-controller Service outside the cluster; in the former kube-proxy ran without IPVS, while here IPVS is enabled. With IPVS enabled, kube-proxy binds the externalIPs on every Kubernetes node, shown here as inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0. The cluster administrator therefore has to ensure that 192.168.61.10 can be routed to the Kubernetes edge nodes.
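
On a small bare-metal network, the simplest way to satisfy this is a static route toward an edge node, since kube-ipvs0 is a NOARP interface (visible in the ip addr output above) and the VIP will not answer ARP by itself. A sketch (the gateway choice is an assumption for this environment; for real redundancy you would publish routes via both edge nodes, e.g. with ECMP or a routing protocol):

# on a router/client outside the cluster: route the VIP via edge node node1
ip route add 192.168.61.10/32 via 192.168.61.11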
