kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest upstream best practices for cluster configuration.

In the recently released Kubernetes 1.15, kubeadm's support for HA cluster configuration has reached beta, which shows that kubeadm is getting ever closer to being ready for production use.

1. Preparation

1.1 System configuration

Some preparation is needed before installing. The two CentOS 7.6 hosts are as follows:

cat /etc/hosts
192.168.99.11 node1
192.168.99.12 node2

If a firewall is enabled on any host, the ports required by the Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on each node here:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0

vi /etc/selinux/config
SELINUX=disabled

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

1.2 Prerequisites for enabling ipvs in kube-proxy

Since ipvs has been merged into the kernel mainline, enabling ipvs for kube-proxy requires the following kernel modules to be loaded:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes (node1 and node2):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on every node (yum install ipset). To make it easy to inspect the ipvs proxy rules later, it is also worth installing the management tool ipvsadm (yum install ipvsadm); both can be installed in one step as shown below.
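For convenience, both packages can be installed on each node with a single command:

yum install -y ipset ipvsadm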

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.

1.3 Installing Docker

Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.

Install the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

Check the latest available Docker versions:

yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
...

The Docker versions currently supported by Kubernetes 1.15 are 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09. Here Docker 18.09.7 is installed on each node.

yum makecache fast

yum install -y --setopt=obsoletes=0 \
  docker-ce-18.09.7-3.el7

systemctl start docker
systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT.

iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
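If the FORWARD policy shows DROP instead (some Docker versions set it to DROP when the daemon starts), it can be switched back with the command below; note that this alone is not persistent across reboots, so you may also want to add it to a boot-time script:

iptables -P FORWARD ACCEPT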

1.4 Changing the Docker cgroup driver to systemd

According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the nodes more stable under resource pressure, so the Docker cgroup driver is changed to systemd on every node here.

Create or edit /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on each node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy or other workaround to reach it.

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

yum makecache fast
yum install -y kubelet kubeadm kubectl

...
Installed:
  kubeadm.x86_64 0:1.15.0-0                  kubectl.x86_64 0:1.15.0-0                      kubelet.x86_64 0:1.15.0-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7            cri-tools.x86_64 0:1.12.0-0                   kubernetes-cni.x86_64 0:0.7.5-0     libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7     libnetfilter_queue.x86_64 0:1.0.2-2.el7_2     socat.x86_64 0:1.7.3.2-2.el7

The installation output shows that three dependencies, cri-tools, kubernetes-cni and socat, were also installed:

  • since Kubernetes 1.14, the cni dependency has been bumped upstream to version 0.7.5
  • socat is a dependency of the kubelet
  • cri-tools is the command-line tool for the CRI (Container Runtime Interface)

Running kubelet --help shows that most of the kubelet's command-line flags have been DEPRECATED, for example:

......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......

Instead, the official recommendation is to use --config to specify a configuration file and put what those flags used to configure into that file; see Set Kubelet parameters via a config file for details. Kubernetes made this change to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.

The kubelet configuration file must be in JSON or YAML format; see the documentation for details. A minimal illustration follows.
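For illustration only, a hypothetical minimal kubelet configuration file might look like the sketch below; kubeadm generates its own, much more complete file at /var/lib/kubelet/config.yaml during kubeadm init, so you normally do not write one by hand:

# hypothetical example file, not the one kubeadm generates
cat <<EOF > /tmp/kubelet-config-example.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# replaces the deprecated --address flag shown above
address: 0.0.0.0
EOF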

Since Kubernetes 1.8, swap must be disabled on the system; otherwise, with the default configuration, the kubelet will not start. Disable swap as follows:

swapoff -a

Edit /etc/fstab and comment out the swap entry so it is no longer mounted automatically (a non-interactive way to do this is sketched below), then use free -m to confirm that swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
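One possible way to comment out the swap entry in /etc/fstab without opening an editor (a sketch that assumes a standard whitespace-separated fstab; double-check the file afterwards):

# comment out any uncommented line whose filesystem type is swap
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
# confirm that swap now reports 0
free -m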

Because the two hosts used for this test also run other services, and turning off swap might affect them, the kubelet configuration is changed here to drop this requirement instead. Use the kubelet start-up flag --fail-swap-on=false to remove the "swap must be disabled" restriction by editing /etc/sysconfig/kubelet and adding:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

2.2 Initializing the cluster with kubeadm init

Enable the kubelet service at boot on each node:

systemctl enable kubelet.service

kubeadm config print init-defaults prints the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

The default configuration shows that imageRepository can be used to customize where the images needed by the cluster are pulled from during initialization. Based on the defaults, the kubeadm.yaml configuration file used to initialize the cluster here is:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.99.11
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.244.0.0/16

A cluster initialized with kubeadm's default configuration taints the master node with node-role.kubernetes.io/master:NoSchedule, which prevents the master from being scheduled to run workloads. Since this test environment only has two nodes, the taint is changed here to node-role.kubernetes.io/master:PreferNoSchedule; a quick way to confirm it afterwards is shown below.
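Once kubeadm init (run below) has finished and kubectl has been configured, the taint can be checked with a simple command like:

kubectl describe node node1 | grep Taints
# Taints:             node-role.kubernetes.io/master:PreferNoSchedule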

Before starting the cluster initialization, kubeadm config images pull can be used to pre-pull the Docker images Kubernetes needs on each node.
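For example, pre-pulling with the same configuration file so that the image versions match the cluster about to be built:

kubeadm config images pull --config kubeadm.yaml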

Next, initialize the cluster with kubeadm. node1 is chosen as the master node; run the following command on node1:

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.99.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.004907 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 4qcl2f.gtl3h8e5kjltuo0r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

The output above records the full initialization process; from it you can see the key steps that would be involved in installing a Kubernetes cluster by hand. The key items are:

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

  • [certs] generates the various certificates

  • [kubeconfig] generates the related kubeconfig files

  • [control-plane] creates the static pods for the apiserver, controller-manager and scheduler from the yaml files in the /etc/kubernetes/manifests directory

  • [bootstraptoken] generates the bootstrap token; record it, since it is needed later when adding nodes to the cluster with kubeadm join

  • the following commands configure kubectl access to the cluster for a regular user:

  • mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • finally, it prints the command for joining nodes to the cluster: kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \ --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

Check the cluster status and confirm that all components are healthy:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

If the cluster initialization runs into problems, it can be cleaned up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2.3 Installing a Pod network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is 0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.

If a node has more than one network interface, then per flannel issue 39701 you currently need to use the --iface argument in kube-flannel.yml to specify the name of the hosts' internal interface; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld start-up arguments:

......
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
......

Use kubectl get pod --all-namespaces -o wide to make sure all pods are in the Running state.

kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m

2.4 Testing that cluster DNS works

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$

Inside the container, run nslookup kubernetes.default and confirm that name resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.5 Adding a node to the Kubernetes cluster

Next, add the host node2 to the Kubernetes cluster by running the following on node2:

kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e \
 --ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

node2 joined the cluster without any trouble. Now run the following on the master node to list the nodes in the cluster:

kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   57m   v1.15.0
node2   Ready    <none>   11s   v1.15.0

2.5.1 Removing a node from the cluster

To remove node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

On node1:

kubectl delete node node2

2.6 Enabling ipvs in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit cm kube-proxy -n kube-system
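After saving the edit, a quick grep of the ConfigMap confirms that the new value is in place (assuming the default config.conf layout):

kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'
#     mode: "ipvs"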

Then restart the kube-proxy pods on each node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg                1/1     Running   0          3s
kube-proxy-k8vhm                1/1     Running   0          9s

kubectl logs kube-proxy-7fsrg  -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller

The log line "Using ipvs Proxier" shows that ipvs mode is now enabled; the rules themselves can be listed as shown below.
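Since ipvsadm was installed earlier, the IPVS rules that kube-proxy programs can also be inspected directly on any node:

# list the IPVS virtual servers and their real-server backends
ipvsadm -Ln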

3. Deploying common Kubernetes components

More and more companies and teams are using Helm, the Kubernetes package manager, so Helm is used here to install the common Kubernetes components as well.

3.1 Installing Helm

Helm consists of the helm command-line client and the server-side tiller, and installing it is straightforward. Download the helm command-line tool into /usr/local/bin on the master node node1; version 2.14.1 is used here:

curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

To install the server-side tiller, this machine also needs kubectl and a kubeconfig file set up, so that kubectl can reach the apiserver and work normally. kubectl is already configured on node1.

Because the Kubernetes apiserver has RBAC enabled, a service account named tiller must be created for tiller and given a suitable role; see Role-based Access Control in the Helm documentation for details. For simplicity, the built-in cluster-admin ClusterRole is bound to it directly here. Create helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next, deploy tiller with helm:

helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

By default, tiller is deployed into the kube-system namespace of the cluster:

kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s

helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

Note that this requires network access to gcr.io and kubernetes-charts.storage.googleapis.com; if they are unreachable, you can point tiller at an image in a private registry with helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.14.1 --skip-refresh (the tiller image version should match the helm client, v2.14.1 here).
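For example (the registry address is a placeholder to replace with your own):

helm init --service-account tiller \
  --tiller-image <your-docker-registry>/tiller:v2.14.1 \
  --skip-refresh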

Finally, on node1, switch the helm chart repository to the mirror provided by Azure:

helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories

helm repo list
NAME  	URL
stable	http://mirror.azure.cn/kubernetes/charts
local 	http://127.0.0.1:8879/charts

3.2 Deploying Nginx Ingress with Helm

To expose services in the cluster to the outside, an Ingress is needed, so Helm is used next to deploy Nginx Ingress onto Kubernetes. The Nginx Ingress Controller is deployed on the Kubernetes edge node; for high availability of Kubernetes edge nodes, see the earlier write-up on HA for Kubernetes Ingress edge nodes in a bare-metal environment. The Ingress Controller uses hostNetwork.

node1 (192.168.99.11) is used as the edge node and labeled accordingly:

kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl get node
NAME    STATUS   ROLES         AGE    VERSION
node1   Ready    edge,master   138m   v1.15.0
node2   Ready    <none>        82m    v1.15.0

The values file ingress-nginx.yaml for the stable/nginx-ingress chart is as follows:

controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx-ingress
            - key: component
              operator: In
              values:
              - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: PreferNoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: PreferNoSchedule

The nginx ingress controller replicaCount is 1, and it will be scheduled onto the edge node node1. No externalIPs are specified for the nginx ingress controller service; instead, hostNetwork: true makes the nginx ingress controller use the host network.

helm repo update

helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx  \
-f ingress-nginx.yaml

kubectl get pod -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-cc9b6d55b-pr8vr        1/1     Running   0          10m   192.168.99.11   node1   <none>           <none>
nginx-ingress-default-backend-cc888fd56-bf4h2   1/1     Running   0          10m   10.244.0.14     node1   <none>           <none>

If visiting http://192.168.99.11 returns the default backend, the deployment is complete; a quick check is shown below.
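A quick check from any machine that can reach node1 (the exact response text may vary slightly between nginx-ingress versions):

curl http://192.168.99.11
# default backend - 404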

3.3 Deploying the dashboard with Helm

kubernetes-dashboard.yaml:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
      - k8s.frognew.com
nodeSelector:
    node-role.kubernetes.io/edge: ''
tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system  \
-f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s                 kubernetes.io/service-account-token   3      3m7s

kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name:         kubernetes-dashboard-token-pkm2s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wa20ycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmMDc4MWRkLTE1NmEtMTFlOS1iMGYwLTA4MDAyN2JiN2M0MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.24ad6ZgZMxdydpwlmYAiMxZ9VSIN7dDR7Q6-RLW0qC81ajXoQKHAyrEGpIonfld3gqbE0xO8nisskpmlkQra72-9X6sBPoByqIKyTsO83BQlME2sfOJemWD0HqzwSCjvSQa0x-bUlq9HgH2vEXzpFuSS6Svi7RbfzLXlEuggNoC4MfA4E2hF1OX_ml8iAKx-49y1BQQe5FGWyCyBSi1TD_-ZpVs44H5gIvsGK2kcvi0JT4oHXtWjjQBKLIWL7xxyRCSE4HmUZT2StIHnOwlX7IEIB0oBX4mPg2_xNGnqwcu-8OERU9IoqAAE2cZa0v3b5O2LMcJPrcxrVOukvRIumA

Use the token above to log in at the dashboard login screen.

(Screenshot: dashboard)

3.4 Deploying metrics-server with Helm

Heapster's GitHub page, https://github.com/kubernetes/heapster, shows that Heapster is DEPRECATED; its deprecation timeline is published there. Heapster has been removed from the various Kubernetes installation scripts starting with Kubernetes 1.12.

Kubernetes recommends metrics-server instead, so metrics-server is also deployed here with Helm.

metrics-server.yaml:

args:
- --logtostderr
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
nodeSelector:
    node-role.kubernetes.io/edge: ''
tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule

helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml

The following commands return basic metrics for the cluster nodes and pods:

kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%

kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-5c98db65d4-dr8lf                8m           7Mi
coredns-5c98db65d4-lp8dg                6m           8Mi
etcd-node1                              44m          46Mi
kube-apiserver-node1                    74m          295Mi
kube-controller-manager-node1           35m          50Mi
kube-flannel-ds-amd64-7lwm9             2m           8Mi
kube-flannel-ds-amd64-mm296             5m           9Mi
kube-proxy-7fsrg                        1m           11Mi
kube-proxy-k8vhm                        3m           11Mi
kube-scheduler-node1                    8m           15Mi
kubernetes-dashboard-848b8dd798-c4sc2   2m           14Mi
metrics-server-8456fb6676-fwh2t         10m          19Mi
tiller-deploy-7bf78cdbf7-9q94c          1m           16Mi

Unfortunately, the Kubernetes Dashboard does not yet support metrics-server, so after replacing Heapster with metrics-server the dashboard can no longer graph pod CPU and memory usage. (In practice this matters little here, since pod monitoring for this cluster is done with Prometheus and Grafana, so viewing pod CPU and memory in the dashboard is not important.) There is plenty of discussion about this on the Dashboard GitHub repo, for example https://github.com/kubernetes/dashboard/issues/2986, and Dashboard plans to support metrics-server at some point. Since metrics-server and the metrics pipeline are clearly the future direction for Kubernetes monitoring, metrics-server is still the recommended choice.

4. Summary

Docker images involved in this installation:

# network and dns
quay.io/coreos/flannel:v0.11.0-amd64
k8s.gcr.io/coredns:1.3.1

# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.14.1

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
k8s.gcr.io/defaultbackend:1.5

# dashboard and metrics-server
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
gcr.io/google_containers/metrics-server-amd64:v0.3.2

References