[Note] Last updated on June 6, 2018. The content may be out of date; use it with caution.
Last year I put together a post called "Kubernetes Ingress in Practice". Kubernetes Ingress has changed a great deal over the past year, and much of that article no longer applies. So I have decided to write a few new posts on Kubernetes Ingress based on how we use it today. The content is fairly introductory, hands-on material.
Ingress is a Kubernetes resource that exposes Services inside a Kubernetes cluster to the outside world through Nginx, HAProxy, or a cloud provider's load balancer. Since our Kubernetes cluster is deployed on bare metal, this series focuses on ingress-nginx.
Below we deploy ingress-nginx into a test Kubernetes cluster. The Kubernetes version used here is 1.9.8.
1. Create the namespace
First, create a namespace named ingress-nginx; all of the ingress-nginx components will live in this namespace.
namespace.yaml contains:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
```

```bash
kubectl create -f namespace.yaml
```
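You can then confirm the namespace exists:

```bash
kubectl get ns ingress-nginx
```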
2. Deploy the default backend
Next, create the Deployment and Service for default-http-backend.
default-backend.yaml contains:
```yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
```
Note that the image gcr.io/google_containers/defaultbackend:1.4 is hosted on gcr.io; for well-known reasons, it is best to push a copy of this image to a private registry reachable from the local Kubernetes cluster.
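For example (a minimal sketch; registry.example.com is a placeholder for your own private registry address), mirror the image from a machine that can reach gcr.io:

```bash
# Pull from gcr.io, retag for the private registry, and push.
docker pull gcr.io/google_containers/defaultbackend:1.4
docker tag gcr.io/google_containers/defaultbackend:1.4 registry.example.com/defaultbackend:1.4
docker push registry.example.com/defaultbackend:1.4
```

Remember to update the image field in default-backend.yaml accordingly.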
```bash
kubectl apply -f default-backend.yaml
deployment "default-http-backend" created
service "default-http-backend" created
```
Confirm that the default-http-backend Service and Pod have been created, and that the Pod is in the Running state:
```bash
kubectl get svc,deploy,pod -n ingress-nginx
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/default-http-backend   ClusterIP   10.111.47.113   <none>        80/TCP    1m

NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/default-http-backend   1         1         1            1           1m

NAME                                       READY     STATUS    RESTARTS   AGE
po/default-http-backend-64985b4bcb-9pn7s   1/1       Running   0          1m
```
As its name suggests, default-http-backend is the default backend: when a request from outside the cluster enters through the ingress and cannot be routed to any matching backend Service, it is forwarded to this default backend.
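As a quick sanity check (a sketch only; the pod name comes from the output above and will differ in your cluster), you can port-forward to the pod and hit the two endpoints the image is required to serve:

```bash
kubectl -n ingress-nginx port-forward po/default-http-backend-64985b4bcb-9pn7s 8080:8080
# In another terminal:
curl http://127.0.0.1:8080/healthz   # expect an HTTP 200
curl http://127.0.0.1:8080/          # expect "default backend - 404"
```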
3. Create the ConfigMaps
Next, create three ConfigMaps: nginx-configuration, tcp-services, and udp-services.
configmap.yaml contains:
```yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
```
tcp-services-configmap.yaml contains:

```yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
```
udp-services-configmap.yaml contains:

```yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
```
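All three ConfigMaps start out empty. The tcp-services and udp-services ConfigMaps let the controller proxy plain TCP/UDP services later; each data entry maps a listening port on the ingress nginx to a namespace/service:port. For illustration only (the service name and ports here are hypothetical):

```yaml
# Hypothetical example: listen on port 9000 of the ingress nginx and
# proxy to port 5432 of the "postgres" Service in the "default" namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/postgres:5432"
```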
```bash
kubectl apply -f configmap.yaml
kubectl apply -f tcp-services-configmap.yaml
kubectl apply -f udp-services-configmap.yaml
kubectl get cm -n ingress-nginx
NAME                  DATA      AGE
nginx-configuration   0         59s
tcp-services          0         51s
udp-services          0         45s
```
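The nginx-configuration ConfigMap is where cluster-wide nginx options go. As an illustration (the value here is arbitrary), a data entry such as proxy-body-size adjusts the maximum allowed request body size for all proxied services:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "10m"
```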
4. Create the ServiceAccount and grant it permissions
Next, create the ServiceAccount nginx-ingress-serviceaccount, along with the ClusterRole, Role, ClusterRoleBinding, and RoleBinding that grant it the permissions it needs.
rbac.yaml contains:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
```
```bash
kubectl apply -f rbac.yaml
serviceaccount "nginx-ingress-serviceaccount" created
clusterrole "nginx-ingress-clusterrole" created
role "nginx-ingress-role" created
rolebinding "nginx-ingress-role-nisa-binding" created
clusterrolebinding "nginx-ingress-clusterrole-nisa-binding" created
```
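To spot-check that the bindings took effect (assuming your own user is allowed to impersonate service accounts, e.g. as cluster-admin), you can ask the API server on the ServiceAccount's behalf:

```bash
kubectl auth can-i list ingresses.extensions --all-namespaces \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
# expect: yes
```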
5. Deploy the nginx-ingress-controller
Now deploy the key component, the ingress controller nginx-ingress-controller. As mentioned at the start, Ingress is a Kubernetes resource that exposes in-cluster Services to the outside world through Nginx, HAProxy, or a cloud provider's load balancer, and nginx-ingress-controller is that load balancer.
nginx-ingress-controller.yaml contains:
```yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node1
                - node2
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsNonRoot: false
```
```bash
kubectl apply -f nginx-ingress-controller.yaml
deployment "nginx-ingress-controller" created
```

Confirm that the controller's Deployment and Pods have been created and are Running:

```bash
kubectl get deployment,pod -n ingress-nginx
NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/default-http-backend      1         1         1            1           1h
deploy/nginx-ingress-controller  2         2         2            2           2m

NAME                                           READY     STATUS    RESTARTS   AGE
po/default-http-backend-64985b4bcb-9pn7s       1/1       Running   0          1h
po/nginx-ingress-controller-7f9d776bff-5k28n   1/1       Running   0          2m
po/nginx-ingress-controller-7f9d776bff-rwls2   1/1       Running   0          2m
```
Note that we use the node-affinity feature of Kubernetes Pod scheduling here to pin the two nginx-ingress-controller Pod replicas to the nodes node1 and node2.
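You can verify the placement from the NODE column of kubectl get pod -o wide (output abbreviated; the exact columns depend on your kubectl version):

```bash
kubectl get pod -n ingress-nginx -o wide | grep nginx-ingress-controller
nginx-ingress-controller-7f9d776bff-5k28n   1/1   Running   0   2m   ...   node1
nginx-ingress-controller-7f9d776bff-rwls2   1/1   Running   0   2m   ...   node2
```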
6. Expose the nginx-ingress-controller outside the Kubernetes cluster
Recall once more that Ingress is a Kubernetes resource that exposes in-cluster Services to the outside world through Nginx, HAProxy, or a cloud provider's load balancer, and that nginx-ingress-controller is that load balancer. So we still need to create a Service for the nginx-ingress-controller itself and expose it outside the cluster (which has a bit of a chicken-and-egg flavor to it).
Because our Kubernetes cluster is deployed on bare metal, we create the ingress-nginx Service using ExternalIPs. ingress-nginx.svc.yaml contains:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalIPs:
    - 192.168.1.101
    - 192.168.1.102
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: ingress-nginx
```
Here 192.168.1.101 and 192.168.1.102 are the IPs of the two Kubernetes nodes node1 and node2, respectively.
```bash
kubectl apply -f ingress-nginx.svc.yaml
service "ingress-nginx" created
```

```bash
kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP                   PORT(S)          AGE
default-http-backend   ClusterIP   10.111.47.113   <none>                        80/TCP           1h
ingress-nginx          ClusterIP   10.96.122.13    192.168.1.101,192.168.1.102   80/TCP,443/TCP   55s
```
As you can see, ports 80 and 443 of the controller's nginx are now exposed outside the cluster at 192.168.1.101 and 192.168.1.102.
Send a request to each ExternalIP:
```bash
curl 192.168.1.101:80
default backend - 404
curl 192.168.1.102:80
default backend - 404
```
Because no Ingress resource has been created in the cluster yet, requests to these two ExternalIPs are routed to the default backend.
Check on node1 and node2 respectively:
```bash
netstat -nltp | grep kube-proxy | grep 80
tcp        0      0 192.168.1.101:80        0.0.0.0:*               LISTEN      2748840/kube-proxy
netstat -nltp | grep kube-proxy | grep 80
tcp        0      0 192.168.1.102:80        0.0.0.0:*               LISTEN      2989745/kube-proxy
```
This shows that a Service with ExternalIPs is also exposed through kube-proxy, which listens on the external IP on each node. Note that 192.168.1.101 and 192.168.1.102 are internal IPs: in practice you need an edge router, or the load balancer of a unified ingress tier, to forward traffic arriving at your public IPs to these internal IPs; external clients then reach all services exposed via ingress in the cluster by domain name.
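For illustration only (a hypothetical setup that uses a plain nginx layer-4 proxy as the edge load balancer; adapt the addresses and ports to your environment), the forwarding might look like:

```nginx
# nginx.conf on the edge load balancer (requires the stream module)
stream {
    upstream ingress_http {
        server 192.168.1.101:80;
        server 192.168.1.102:80;
    }
    server {
        listen 80;               # public-facing port
        proxy_pass ingress_http; # forward to the in-cluster ExternalIPs
    }
}
```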
How to create Ingress resources will be covered in the next post.
At this point, we have deployed the nginx-ingress-controller into the Kubernetes cluster.