In the previous section, buildkitd was deployed as a plain binary in rootless mode; this section deploys buildkitd into a k8s cluster. The environment: k8s cluster version 1.20.7, servers running CentOS 7.9, and a Jenkins master that uses the kubernetes-plugin to dynamically create the Jenkins slave pods that run jobs in the jenkins namespace of the k8s cluster.

Deploying buildkitd to the k8s cluster

Before deploying, make sure the kernel parameter user.max_user_namespaces=28633 is set on every node of the k8s cluster so that containers can run in rootless mode. The examples/kubernetes directory in the buildkit source tree already provides sample YAML files for deploying buildkit on k8s in several forms; here we use deployment+service.rootless.yaml.
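Setting the parameter persistently on each node is a one-liner (a minimal sketch, assuming root access on CentOS 7; the drop-in file name is arbitrary):

# persist the user namespace limit across reboots and apply it immediately
echo "user.max_user_namespaces=28633" > /etc/sysctl.d/99-userns.conf
sysctl --system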

As before, server and client certificates need to be generated with the cfssl tool; the generation steps are omitted here. Because buildctl running in Jenkins slave pods inside the k8s cluster will reach buildkitd through its service, make sure the buildkitd service name is included in the SANs of the server certificate:

 1"sans": [
 2    "buildkitd.jenkins.svc.cluster.local",
 3    "buildkitd.jenkins.svc.cluster",
 4    "buildkitd.jenkins.svc",
 5    "buildkitd.jenkins",
 6    "buildkitd",
 7    "127.0.0.1",
 8    "::1",
 9    "::"
10  ]

The generated certificates:

├── client
│   ├── ca.pem
│   ├── cert.pem
│   └── key.pem
└── daemon
    ├── ca.pem
    ├── cert.pem
    └── key.pem
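A quick way to confirm the SANs actually made it into the server certificate is to inspect it with openssl:

# print the Subject Alternative Name entries of the daemon certificate
openssl x509 -in daemon/cert.pem -noout -text | grep -A1 "Subject Alternative Name"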

Based on the certificates above, generate the YAML files buildkit-daemon-certs-secret.yaml and buildkit-client-certs-secret.yaml for the secrets that store the certificates in k8s:

kubectl create secret generic buildkit-daemon-certs \
    -n jenkins \
    --dry-run=client -o yaml \
    --from-file=./daemon \
    > buildkit-daemon-certs-secret.yaml

kubectl create secret generic buildkit-client-certs \
    -n jenkins \
    --dry-run=client -o yaml \
    --from-file=./client \
    > buildkit-client-certs-secret.yaml

Also generate buildkit-client-registry-secret.yaml, the secret holding the credentials for the private image registry that buildctl needs when building and pushing images:

kubectl create secret docker-registry buildkit-client-registry-secret \
  -n jenkins \
  --dry-run=client -o yaml \
  --docker-server=harbor.myorg.com \
  --docker-username=username \
  --docker-password=password \
  > buildkit-client-registry-secret.yaml

Referring to [deployment+service.rootless.yaml](https://github.com/moby/buildkit/blob/master/examples/kubernetes/deployment%2Bservice.rootless.yaml), write buildkit-deploy.yaml and buildkit-service.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: buildkitd
  name: buildkitd
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
      annotations:
        container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
        container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
    # see buildkit/docs/rootless.md for caveats of rootless mode
    spec:
      containers:
        - name: buildkitd
          image: moby/buildkit:v0.8.3-rootless
          args:
            - --addr
            - unix:///run/user/1000/buildkit/buildkitd.sock
            - --addr
            - tcp://0.0.0.0:1234
            - --tlscacert
            - /certs/ca.pem
            - --tlscert
            - /certs/cert.pem
            - --tlskey
            - /certs/key.pem
            - --oci-worker-no-process-sandbox
          # the probe below will only work after Release v0.6.3
          readinessProbe:
            exec:
              command:
                - buildctl
                - debug
                - workers
            initialDelaySeconds: 5
            periodSeconds: 30
          # the probe below will only work after Release v0.6.3
          livenessProbe:
            exec:
              command:
                - buildctl
                - debug
                - workers
            initialDelaySeconds: 5
            periodSeconds: 30
          securityContext:
            # To change UID/GID, you need to rebuild the image
            runAsUser: 1000
            runAsGroup: 1000
          ports:
            - containerPort: 1234
          volumeMounts:
            - name: certs
              readOnly: true
              mountPath: /certs
      volumes:
        # buildkit-daemon-certs must contain ca.pem, cert.pem, and key.pem
        - name: certs
          secret:
            secretName: buildkit-daemon-certs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: buildkitd
  name: buildkitd
  namespace: jenkins
spec:
  ports:
    - port: 1234
      protocol: TCP
  selector:
    app: buildkitd

Deploy the resource files created above into the k8s cluster with kubectl:

kubectl apply -f buildkit-client-registry-secret.yaml
kubectl apply -f buildkit-client-certs-secret.yaml
kubectl apply -f buildkit-daemon-certs-secret.yaml
kubectl apply -f buildkit-deploy.yaml
kubectl apply -f buildkit-service.yaml

kubectl get deploy,svc -n jenkins
NAME                        READY   UP-TO-DATE   AVAILABLE
deployment.apps/buildkitd   1/1     1            1
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
service/buildkitd   ClusterIP   10.102.111.220   <none>        1234/TCP
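The readiness probe runs buildctl debug workers inside the pod, so READY 1/1 already implies the rootless worker is functional. If the pod fails to become ready, the pod logs are the first place to look:

# sanity-check the daemon pod by label before wiring up Jenkins
kubectl get pods -l app=buildkitd -n jenkins
kubectl logs -l app=buildkitd -n jenkins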

Accessing buildkitd in the k8s cluster from buildctl in a Jenkins slave pod

buildctl is already installed in the Jenkins slave image. In addition, in the Kubernetes Pod Template under Configure Clouds on the Jenkins master, configure two mounts: the certificate files from the buildkit-client-certs secret go to the $HOME/.buildctl/certs directory of the Jenkins slave pod, and the .dockerconfigjson file from buildkit-client-registry-secret goes to $HOME/.buildctl/secret.
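If the pod template is defined as raw YAML instead of through the UI, the two mounts look roughly like this (a sketch only: the jnlp container name and /home/jenkins as $HOME are assumptions that depend on the slave image):

# sketch of the relevant part of the jenkins slave pod spec
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp                 # assumed container name of the jenkins slave
      volumeMounts:
        - name: buildctl-certs
          readOnly: true
          mountPath: /home/jenkins/.buildctl/certs    # $HOME/.buildctl/certs
        - name: buildctl-secret
          readOnly: true
          mountPath: /home/jenkins/.buildctl/secret   # $HOME/.buildctl/secret
  volumes:
    - name: buildctl-certs
      secret:
        secretName: buildkit-client-certs
    - name: buildctl-secret
      secret:
        secretName: buildkit-client-registry-secret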

Use buildctl inside the Jenkins slave pod to access buildkitd:

buildctl \
  --addr tcp://buildkitd:1234 \
  --tlscacert=$HOME/.buildctl/certs/ca.pem \
  --tlscert=$HOME/.buildctl/certs/cert.pem \
  --tlskey=$HOME/.buildctl/certs/key.pem \
  debug workers

ID                              PLATFORMS
kng080vtmbkssq2jdpqx4t7vv       linux/amd64,linux/386

Test an image build. buildctl reads registry credentials from ~/.docker/config.json, so the mounted .dockerconfigjson is copied there for the duration of the build:

mkdir /tmp/myproject
echo "FROM alpine" > /tmp/myproject/Dockerfile

mkdir -p ~/.docker
cp ~/.buildctl/secret/.dockerconfigjson ~/.docker/config.json

buildctl \
  --addr tcp://buildkitd:1234 \
  --tlscacert=$HOME/.buildctl/certs/ca.pem \
  --tlscert=$HOME/.buildctl/certs/cert.pem \
  --tlskey=$HOME/.buildctl/certs/key.pem \
  build \
  --frontend dockerfile.v0 \
  --local context=/tmp/myproject \
  --local dockerfile=/tmp/myproject \
  --output type=image,name=harbor.myorg.com/myproject/myimg:1.0,push=true

rm -f ~/.docker/config.json

As this shows, once buildkitd is deployed into the k8s cluster, a Jenkins slave pod on the cluster only needs the buildctl command-line tool to talk to it and build images; it no longer depends on a Docker daemon, and Docker-outside-of-Docker is no longer needed.
