kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of the practices kubeadm applies to cluster configuration, so experimenting with kubeadm is a good way to learn the latest upstream best practices for setting up a cluster.

The Kubernetes document Using kubeadm to Create a Cluster states that the main kubeadm features are already in beta and are expected to reach GA in 2018, which means kubeadm is getting closer and closer to being usable in production.

Our production Kubernetes clusters are still highly available clusters deployed from binaries with ansible. Trying out kubeadm in Kubernetes 1.10 here is a way to follow the official best practices for cluster initialization and configuration and to further improve our ansible deployment scripts.

1. Preparation

1.1 System Configuration

Before installing, do the following preparation. The two CentOS 7.4 hosts are:

cat /etc/hosts
192.168.61.11 node1
192.168.61.12 node2

If the firewall is enabled on the hosts, you need to open the ports required by the Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on each node here:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0

vi /etc/selinux/config
SELINUX=disabled

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
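If a node is rebooted later, the br_netfilter module may not be loaded automatically and the bridge sysctls above will not apply. A minimal sketch for persisting the module and verifying the settings (the file name under /etc/modules-load.d/ is arbitrary):

# load br_netfilter at boot
echo "br_netfilter" > /etc/modules-load.d/k8s.conf

# verify the module is loaded and the sysctls are in effect
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward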

1.2 Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

List the Docker versions currently available in the repository:

yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable

Kubernetes 1.10 has been validated against Docker versions 1.11, 1.12, 1.13.1, and 17.03. Note that the minimum Docker version supported by Kubernetes 1.10 is 1.11. Here we install Docker 17.03.2 on each node.

yum makecache fast

yum install -y --setopt=obsoletes=0 \
  docker-ce-17.03.2.ce-1.el7.centos \
  docker-ce-selinux-17.03.2.ce-1.el7.centos

systemctl start docker
systemctl enable docker

Starting with version 1.13, Docker changed its default firewall rules and disabled the FORWARD chain in the iptables filter table, which breaks Pod-to-Pod communication across Nodes in a Kubernetes cluster. Run the following command on each Docker node:

iptables -P FORWARD ACCEPT

You can make this persistent by adding the command as an ExecStartPost entry in Docker's systemd unit file:

ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

systemctl daemon-reload
systemctl restart docker
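One way to add the ExecStartPost line without editing the packaged unit file is a systemd drop-in; a minimal sketch (the drop-in file name is arbitrary). After creating it, run the daemon-reload and restart shown above, then confirm the policy:

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/10-forward-accept.conf
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF

# after systemctl daemon-reload && systemctl restart docker:
iptables -S FORWARD | head -1    # should print "-P FORWARD ACCEPT"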

2. Install kubeadm and kubelet

Next, install kubeadm and kubelet on each node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable. If it is not, you will need a proxy or mirror to get around the network restrictions.

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

yum makecache fast
yum install -y kubelet kubeadm kubectl

...
Installed:
  kubeadm.x86_64 0:1.10.0-0    kubectl.x86_64 0:1.10.0-0    kubelet.x86_64 0:1.10.0-0

Dependency Installed:
  kubernetes-cni.x86_64 0:0.6.0-0   socat.x86_64 0:1.7.3.2-2.el7
From the installation output you can see that two dependencies, kubernetes-cni and socat, were also installed:

  • starting with Kubernetes 1.9, the cni dependency has been upgraded to version 0.6.0
  • socat is a dependency of kubelet

The relevant kubelet startup parameter from the Kubernetes documentation:

  --cgroup-driver string    Driver that the kubelet uses to manipulate cgroups on the host.
                            Possible values: 'cgroupfs', 'systemd' (default "cgroupfs")

The default value is cgroupfs, but note that the 10-kubeadm.conf file generated when installing kubelet and kubeadm with yum changes this parameter to systemd.

Check kubelet's /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, which contains the following:

1Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Print the docker information with docker info:

docker info
......
Server Version: 17.03.2-ce
......
Cgroup Driver: cgroupfs

You can see that Docker 17.03 uses cgroupfs as its Cgroup Driver.

So change the docker cgroup driver on each node to match kubelet, i.e. create or modify /etc/docker/daemon.json and add the following:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart docker:

systemctl restart docker
systemctl status docker
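After the restart, it is worth confirming that docker now reports the same cgroup driver as kubelet:

docker info 2>/dev/null | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd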

Enable the kubelet service to start on boot on each node:

systemctl enable kubelet.service

Starting with Kubernetes 1.8, swap must be disabled on the system; with the default configuration, kubelet will not start if swap is on. This restriction can be lifted with the kubelet startup parameter --fail-swap-on=false.

To disable swap:

swapoff -a

Also edit /etc/fstab and comment out the swap auto-mount entry, then use free -m to confirm swap is off. To adjust the swappiness parameter, add the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
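A quick check that swap is really off and the swappiness change took effect:

free -m | grep -i swap        # the Swap line should show 0 total
cat /proc/sys/vm/swappiness   # should print 0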

Because the two hosts used for this test also run other services and disabling swap might affect them, we instead remove this restriction with the kubelet startup parameter --fail-swap-on=false. Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add:

1Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

Apply the configuration change:

systemctl daemon-reload

3. Initialize the Cluster with kubeadm init

Next, initialize the cluster with kubeadm. Choose node1 as the Master Node and run the following command on node1:

kubeadm init \
  --kubernetes-version=v1.10.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11

Because we choose flannel as the Pod network add-on, the command above specifies --pod-network-cidr=10.244.0.0/16.

Running it produced the following error:

[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

There is a warning, crictl not found in system path, and an error, running with swap on is not supported. Please disable swap. Since we already changed the kubelet startup parameters, add --ignore-preflight-errors=Swap to ignore the error and run the command again.

kubeadm init \
   --kubernetes-version=v1.10.0 \
   --pod-network-cidr=10.244.0.0/16 \
   --apiserver-advertise-address=192.168.61.11 \
   --ignore-preflight-errors=Swap

[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1] and IPs [192.168.61.11]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 221.004142 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node1 as master by adding a label and a taint
[markmaster] Master node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: leaahe.ydaf5vnts83a9myp
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.61.11:6443 --token leaahe.ydaf5vnts83a9myp --discovery-token-ca-cert-hash sha256:6b2761d20f115c4e22cc14788a78e1691c13cf42f6d573ae8a8f3efbed6da60f

The output above records the complete initialization process; from it you can see most of the key steps required to manually set up a Kubernetes cluster.

The key parts are:

  • [certificates] generates the various certificates
  • [kubeconfig] then generates the kubeconfig files; this is also what we do in our Kubernetes 1.6 highly available cluster deployment, so nothing new here so far
  • [bootstraptoken] generates a token; write it down, as it will be needed later when adding nodes to the cluster with kubeadm join
  • The following commands configure kubectl access to the cluster for a regular user (a sketch for using this kubeconfig from a workstation outside the cluster follows this list):
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Finally, it prints the command for adding nodes to the cluster: kubeadm join 192.168.61.11:6443 --token leaahe.ydaf5vnts83a9myp --discovery-token-ca-cert-hash sha256:6b2761d20f115c4e22cc14788a78e1691c13cf42f6d573ae8a8f3efbed6da60f
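If you want to use kubectl from a workstation outside the cluster, one option is to copy admin.conf there and point kubectl at it explicitly; a sketch (the destination path is just an example):

# on the workstation
scp root@192.168.61.11:/etc/kubernetes/admin.conf ~/.kube/node1-admin.conf
kubectl --kubeconfig ~/.kube/node1-admin.conf get nodes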

Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Confirm that all components are in the Healthy state.

If you run into problems during cluster initialization, you can clean up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

4. Install the Pod Network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created

Note that the flannel image referenced in kube-flannel.yml is 0.10.0, i.e. quay.io/coreos/flannel:v0.10.0-amd64.

If a Node has more than one network interface, then per flannel issue 39701 you currently need to set the --iface parameter in kube-flannel.yml to the name of the host's internal interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

......
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
......
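If you prefer not to edit the manifest by hand, one way to inject the flag is a sed substitution that appends the --iface argument right after the existing --kube-subnet-mgr entry (eth1 is only an example interface name, and the 8-space indentation must match the args entries in your copy of kube-flannel.yml):

sed -i 's/- --kube-subnet-mgr/- --kube-subnet-mgr\n        - --iface=eth1/' kube-flannel.yml
grep -B1 -A1 -- '--iface=eth1' kube-flannel.yml   # confirm the argument was added before applying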

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state:

kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-node1                      1/1       Running   0          1m        192.168.61.11   node1
kube-system   kube-apiserver-node1            1/1       Running   0          1m        192.168.61.11   node1
kube-system   kube-controller-manager-node1   1/1       Running   0          2m        192.168.61.11   node1
kube-system   kube-dns-86f4d74b45-mw5n7       3/3       Running   0          2m        10.244.0.2      node1
kube-system   kube-flannel-ds-vbbvj           1/1       Running   0          1m        192.168.61.11   node1
kube-system   kube-proxy-z9ngd                1/1       Running   0          2m        192.168.61.11   node1
kube-system   kube-scheduler-node1            1/1       Running   0          1m        192.168.61.11   node1

5. Let the Master Node Run Workloads

In a cluster initialized by kubeadm, Pods are not scheduled onto the Master Node for security reasons; in other words, the Master Node does not run workloads.

Since this is a test environment, the following command lets the Master Node run workloads:

kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted

6. Test DNS

kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-2716574283-xr8zd:/ ]$

Inside the container, run nslookup kubernetes.default to confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
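When you are done testing, the test Pod can be cleaned up. In Kubernetes 1.10, kubectl run creates a Deployment named curl, so deleting that Deployment removes the Pod as well:

kubectl delete deployment curl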

7. Add a Node to the Kubernetes Cluster

Next, add host node2 to the Kubernetes cluster. Because we likewise removed the mandatory swap-off restriction from kubelet's startup parameters on node2, the --ignore-preflight-errors=Swap parameter is needed here as well. Run on node2:

kubeadm join 192.168.61.11:6443 --token leaahe.ydaf5vnts83a9myp --discovery-token-ca-cert-hash sha256:6b2761d20f115c4e22cc14788a78e1691c13cf42f6d573ae8a8f3efbed6da60f \
  --ignore-preflight-errors=Swap

[preflight] Running pre-flight checks.
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
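The bootstrap token printed by kubeadm init expires after 24 hours by default. If you add more nodes later, a fresh join command can be generated on the master; a sketch (kubeadm token create --print-join-command should be available in this kubeadm version):

# list existing bootstrap tokens and their expiry
kubeadm token list

# create a new token and print the full kubeadm join command, including the CA cert hash
kubeadm token create --print-join-command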

node2 joined the cluster without any trouble. Now run the following on the master node to list the nodes in the cluster:

kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    26m       v1.10.0
node2     Ready     <none>    2m        v1.10.0

How to Remove a Node from the Cluster

To remove node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

8. Deploy the Dashboard Add-on

Note that the current dashboard version is 1.8.3.

Also note that the dashboard project has changed the source directory layout of its deployment files:

mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml

Change the type of the Service in kubernetes-dashboard.yaml to NodePort so that the dashboard can be reached from outside the cluster.
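If the Service has already been created, an alternative to editing the yaml is to patch it in place and then check which NodePort was assigned; a sketch (the service name and namespace are the ones used by the recommended manifest):

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard   # the PORT(S) column shows the assigned NodePort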

The ServiceAccount kubernetes-dashboard defined in kubernetes-dashboard.yaml has relatively limited permissions, so we create a kubernetes-dashboard-admin ServiceAccount and grant it the cluster-admin role. Create kubernetes-dashboard-admin.rbac.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

kubectl create -f kubernetes-dashboard-admin.rbac.yaml
serviceaccount "kubernetes-dashboard-admin" created
clusterrolebinding "kubernetes-dashboard-admin" created

Look up the token of kubernetes-dashboard-admin:

kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-pfss5   kubernetes.io/service-account-token   3         14s

kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-pfss5
Name:         kubernetes-dashboard-admin-token-pfss5
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=1029250a-ad76-11e7-9a1d-08002778b8a1

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1wZnNzNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjEwMjkyNTBhLWFkNzYtMTFlNy05YTFkLTA4MDAyNzc4YjhhMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Bs6h65aFCFkEKBO_h4muoIK3XdTcfik-pNM351VogBJD_pk5grM1PEWdsCXpR45r8zUOTpGM-h8kDwgOXwy2i8a5RjbUTzD3OQbPJXqa1wBk0ABkmqTuw-3PWMRg_Du8zuFEPdKDFQyWxiYhUi_v638G-R5RdZD_xeJAXmKyPkB3VsqWVegoIVTaNboYkw6cgvMa-4b7IjoN9T1fFlWCTZI8BFXbM8ICOoYMsOIJr3tVFf7d6oVNGYqaCk42QL_2TfB6xMKLYER9XDh753-_FDVE5ENtY5YagD3T_s44o0Ewara4P9C3hYRKdJNLxv7qDbwPl3bVFH3HXbsSxxF3TQ

Log in at the dashboard login screen using the token above.
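Instead of copying the token out of the describe output by hand, it can also be extracted directly from the secret; a sketch (the secret name suffix is random, so it is looked up first):

SECRET=$(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d; echo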

9. Deploy the Heapster Add-on

Next, install Heapster to add usage statistics and monitoring to the cluster and a metrics dashboard to the Dashboard UI. InfluxDB is used as Heapster's backend storage. Start the deployment:

mkdir -p ~/k8s/heapster
cd ~/k8s/heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

kubectl create -f ./

Finally, confirm that all Pods are in the Running state. Open the Dashboard and the cluster usage statistics will be displayed as charts.
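A quick way to check that the Heapster stack is up and returning metrics (kubectl top relies on Heapster in Kubernetes 1.10; it can take a minute or two after the Pods start before data appears):

kubectl -n kube-system get pods | grep -E 'heapster|influxdb|grafana'
kubectl top node    # should list CPU/memory usage for node1 and node2 once metrics are collected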

Docker images involved in this installation:

k8s.gcr.io/kube-proxy-amd64:v1.10.0
k8s.gcr.io/kube-scheduler-amd64:v1.10.0
k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
k8s.gcr.io/kube-apiserver-amd64:v1.10.0
k8s.gcr.io/etcd-amd64:3.1.12
quay.io/coreos/flannel:v0.10.0-amd64
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
k8s.gcr.io/pause-amd64:3.1

k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
k8s.gcr.io/heapster-grafana-amd64:v4.4.3
k8s.gcr.io/heapster-amd64:v1.4.2

References