The previous section completed a binary deployment of buildkitd in rootless mode; this section deploys buildkitd into a k8s cluster. Environment details: k8s cluster version 1.20.7, servers running CentOS 7.9, and a Jenkins master that uses the kubernetes-plugin to dynamically create the jenkins slave pods that run jobs in the jenkins namespace of the k8s cluster.

Deploying buildkitd to the k8s cluster

Before deploying, make sure the kernel parameter user.max_user_namespaces=28633 is set on every k8s node, so that containers can run in rootless mode. The examples/kubernetes directory in the buildkit source tree already provides sample yaml files for deploying buildkit on k8s in several forms; here we use deployment+service.rootless.yaml.
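
This setting can be persisted on each node with a sysctl drop-in file (the file name below is an assumption; any file under /etc/sysctl.d works):

```
# /etc/sysctl.d/99-rootless.conf
user.max_user_namespaces=28633
```

Apply it with `sysctl --system` and verify with `sysctl user.max_user_namespaces`.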

As before, the server and client certificates need to be generated with the cfssl tool; the generation steps are omitted here. Because buildctl running in the jenkins slave pods will reach buildkitd through its service inside the k8s cluster, note that the buildkitd service name must be included in the SANs of the server certificate:

"sans": [
    "buildkitd.jenkins.svc.cluster.local",
    "buildkitd.jenkins.svc.cluster",
    "buildkitd.jenkins.svc",
    "buildkitd.jenkins",
    "buildkitd",
    "127.0.0.1",
    "::1",
    "::"
  ]

The generated certificates:

├── client
│   ├── ca.pem
│   ├── cert.pem
│   └── key.pem
└── daemon
    ├── ca.pem
    ├── cert.pem
    └── key.pem
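
Not part of the original setup, but a quick way to confirm the SANs actually made it into a certificate: the sketch below generates a throwaway self-signed cert with the same service names, then prints its subjectAltName extension (requires openssl 1.1.1+; against the real files, point `-in` at daemon/cert.pem).

```shell
# Generate a demo cert carrying the buildkitd service names as SANs
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=buildkitd" \
  -addext "subjectAltName=DNS:buildkitd.jenkins.svc.cluster.local,DNS:buildkitd,IP:127.0.0.1" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem

# Print the SANs recorded in the certificate
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```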

Based on the certificates above, generate the yaml files for the k8s secrets that will hold them, buildkit-daemon-certs-secret.yaml and buildkit-client-certs-secret.yaml:

kubectl create secret generic buildkit-daemon-certs \
    -n jenkins \
    --dry-run=client -o yaml \
    --from-file=./daemon \
    > buildkit-daemon-certs-secret.yaml

kubectl create secret generic buildkit-client-certs \
    -n jenkins \
    --dry-run=client -o yaml \
    --from-file=./client \
    > buildkit-client-certs-secret.yaml
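
For reference, `kubectl create secret generic --from-file=./daemon` keys each data entry by file name, so buildkit-daemon-certs-secret.yaml comes out roughly in the following shape (base64 data elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: buildkit-daemon-certs
  namespace: jenkins
type: Opaque
data:
  ca.pem: <base64>
  cert.pem: <base64>
  key.pem: <base64>
```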

Also generate buildkit-client-registry-secret.yaml, the yaml file for the secret holding the credentials of the private image registry that buildctl will access when building images:

kubectl create secret docker-registry buildkit-client-registry-secret \
  -n jenkins \
  --dry-run=client -o yaml \
  --docker-server=harbor.myorg.com \
  --docker-username=username \
  --docker-password=password \
  > buildkit-client-registry-secret.yaml
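
The .dockerconfigjson that kubectl packs into this secret has roughly the following shape; the sketch below rebuilds it in shell with the placeholder credentials from the command above (the auth field is the base64 of `username:password`):

```shell
# Rebuild the .dockerconfigjson payload that `kubectl create secret docker-registry`
# generates, using the placeholder credentials from the command above.
auth=$(printf 'username:password' | base64)
printf '{"auths":{"harbor.myorg.com":{"username":"username","password":"password","auth":"%s"}}}\n' "$auth"
```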

Referring to [deployment+service.rootless.yaml](https://github.com/moby/buildkit/blob/master/examples/kubernetes/deployment%2Bservice.rootless.yaml), write buildkit-deploy.yaml and buildkit-service.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: buildkitd
  name: buildkitd
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
      annotations:
        container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
        container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
    # see buildkit/docs/rootless.md for caveats of rootless mode
    spec:
      containers:
        - name: buildkitd
          image: moby/buildkit:v0.8.3-rootless
          args:
            - --addr
            - unix:///run/user/1000/buildkit/buildkitd.sock
            - --addr
            - tcp://0.0.0.0:1234
            - --tlscacert
            - /certs/ca.pem
            - --tlscert
            - /certs/cert.pem
            - --tlskey
            - /certs/key.pem
            - --oci-worker-no-process-sandbox
          # the probe below will only work after Release v0.6.3
          readinessProbe:
            exec:
              command:
                - buildctl
                - debug
                - workers
            initialDelaySeconds: 5
            periodSeconds: 30
          # the probe below will only work after Release v0.6.3
          livenessProbe:
            exec:
              command:
                - buildctl
                - debug
                - workers
            initialDelaySeconds: 5
            periodSeconds: 30
          securityContext:
            # To change UID/GID, you need to rebuild the image
            runAsUser: 1000
            runAsGroup: 1000
          ports:
            - containerPort: 1234
          volumeMounts:
            - name: certs
              readOnly: true
              mountPath: /certs
      volumes:
        # buildkit-daemon-certs must contain ca.pem, cert.pem, and key.pem
        - name: certs
          secret:
            secretName: buildkit-daemon-certs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: buildkitd
  name: buildkitd
  namespace: jenkins
spec:
  ports:
    - port: 1234
      protocol: TCP
  selector:
    app: buildkitd

Deploy the resource files created above to the k8s cluster with kubectl:

kubectl apply -f buildkit-client-registry-secret.yaml
kubectl apply -f buildkit-client-certs-secret.yaml
kubectl apply -f buildkit-daemon-certs-secret.yaml
kubectl apply -f buildkit-deploy.yaml
kubectl apply -f buildkit-service.yaml

kubectl get deploy,svc -n jenkins
NAME                        READY   UP-TO-DATE   AVAILABLE 
deployment.apps/buildkitd   1/1     1            1         
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
service/buildkitd   ClusterIP   10.102.111.220   <none>        1234/TCP

Accessing buildkitd in the k8s cluster from buildctl in the jenkins slave pod

buildctl is already installed in the jenkins slave image. In addition, in the Jenkins master's Configure Clouds, the Kubernetes Pod Template must be configured to mount the certificate files from the buildkit-client-certs secret into the $HOME/.buildctl/certs directory of the jenkins slave pod, and the .dockerconfigjson file from buildkit-client-registry-secret into $HOME/.buildctl/secret.
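
In yaml terms, the pod template ultimately produces a pod along the lines of the sketch below (the volume names and agent image are assumptions, and $HOME is taken to be /home/jenkins):

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: jenkins
spec:
  containers:
    - name: jnlp
      image: jenkins/inbound-agent  # assumed slave image with buildctl installed
      volumeMounts:
        - name: buildctl-certs      # client ca.pem / cert.pem / key.pem
          mountPath: /home/jenkins/.buildctl/certs
          readOnly: true
        - name: buildctl-registry   # .dockerconfigjson with the registry credentials
          mountPath: /home/jenkins/.buildctl/secret
          readOnly: true
  volumes:
    - name: buildctl-certs
      secret:
        secretName: buildkit-client-certs
    - name: buildctl-registry
      secret:
        secretName: buildkit-client-registry-secret
```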

Use buildctl in the jenkins slave pod to talk to buildkitd:

buildctl \
  --addr tcp://buildkitd:1234 \
  --tlscacert=$HOME/.buildctl/certs/ca.pem \
  --tlscert=$HOME/.buildctl/certs/cert.pem \
  --tlskey=$HOME/.buildctl/certs/key.pem \
  debug workers

ID                              PLATFORMS
kng080vtmbkssq2jdpqx4t7vv       linux/amd64,linux/386

Test an image build:

mkdir /tmp/myproject
echo "FROM alpine" > /tmp/myproject/Dockerfile

mkdir -p ~/.docker
cp ~/.buildctl/secret/.dockerconfigjson ~/.docker/config.json

buildctl \
  --addr tcp://buildkitd:1234 \
  --tlscacert=$HOME/.buildctl/certs/ca.pem \
  --tlscert=$HOME/.buildctl/certs/cert.pem \
  --tlskey=$HOME/.buildctl/certs/key.pem \
  build   \
  --frontend dockerfile.v0  \
  --local context=/tmp/myproject   \
  --local dockerfile=/tmp/myproject \
  --output type=image,name=harbor.myorg.com/myproject/myimg:1.0,push=true

rm -f ~/.docker/config.json

As we can see, once buildkitd is deployed in the k8s cluster, a Jenkins slave pod on the cluster only needs the buildctl command-line tool to communicate with it and complete image builds; it no longer depends on a Docker daemon and no longer needs Docker outside of Docker.

References