Relearning Containers 09: How Containerd Stores Container Images and Data


2021-06-02
Containerd

In earlier posts we built a rough picture of Containerd's architecture. This post looks at how Containerd stores images and containers, covering image storage and the container RootFS.

From pulling an image to starting a container #

Containerd's configuration file contains the following two settings:

root = "/var/lib/containerd"
state = "/run/containerd"
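
These are also containerd's defaults; you can confirm the effective values of a running daemon with a quick check (output trimmed):

containerd config dump | grep -E '^(root|state)'
root = "/var/lib/containerd"
state = "/run/containerd"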

The directory configured by root holds containerd's persistent data, covering content, snapshots, metadata, and runtime. On a test server, after deleting all images and containers, run the following commands to reinitialize these directories in preparation for the experiments that follow.

systemctl stop containerd
rm -rf /var/lib/containerd/*
rm -rf /var/lib/nerdctl/*
systemctl start containerd

With the test environment reinitialized, look at the /var/lib/containerd directory again:

tree /var/lib/containerd/ -L 2
/var/lib/containerd/
├── io.containerd.content.v1.content
│   └── ingest
├── io.containerd.metadata.v1.bolt
│   └── meta.db
├── io.containerd.runtime.v1.linux
├── io.containerd.runtime.v2.task
├── io.containerd.snapshotter.v1.btrfs
├── io.containerd.snapshotter.v1.native
│   └── snapshots
├── io.containerd.snapshotter.v1.overlayfs
│   └── snapshots
└── tmpmounts

The subdirectories of /var/lib/containerd map cleanly onto content, snapshots, metadata, and runtime, and onto the subsystems and components in containerd's architecture diagram:

[Figure: containerd architecture (containerd-architecture.png)]

The subdirectory names under /var/lib/containerd also match some of the plugin names printed by ctr plugin ls. These directories are in fact where containerd's plugins keep their data; each plugin can have its own data directory. containerd itself stores nothing: all of its functionality is implemented by plugins.
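
For example, the TYPE and ID columns of ctr plugin ls concatenate into exactly those directory names (output abridged and reformatted; the exact plugin list varies by containerd version):

ctr plugin ls
TYPE                            ID           PLATFORMS   STATUS
io.containerd.content.v1        content      -           ok
io.containerd.snapshotter.v1    native       linux       ok
io.containerd.snapshotter.v1    overlayfs    linux       ok
io.containerd.metadata.v1       bolt         -           ok
...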

Following containerd's data-flow diagram below, we will pull an image, start a container, and create a file inside the container, observing at each step how containerd's data directories change.

[Figure: containerd data flow (containerd-data-flow.png)]

First, pull an image:

nerdctl pull redis:alpine3.13
docker.io/library/redis:alpine3.13:                                               resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:eaaa58f8757d6f04b2e34ace57a71d79f8468053c198f5758fd2068ac235f303:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:b7cb70118c9729f8dc019187a4411980418a87e6a837f4846e87130df379e2c8: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:1690b63e207f6651429bebd716ace700be29d0110a0cfefff5038bb2a7fb6fc7:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:6ab1d05b49730290d3c287ccd34640610423d198e84552a4c2a4e98a46680cfd:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:8cc52074f78e0a2fd174bdd470029cf287b7366bf1b8d3c1f92e2aa8789b92ae:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:aa7854465cce07929842cb49fc92f659de8a559cf521fc7ea8e1b781606b85cd:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:8173c12df40f1578a7b2dfbbc0034a4fbc8ec7c870fd32b9236c2e5e1936616a:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:540db60ca9383eac9e418f78490994d0af424aab7bf6d0e47ac8ed4e2e9bcbba:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:29712d301e8c43bcd4a36da8a8297d5ff7f68c3d4c3f7113244ff03675fa5e9c:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 16.4s                                                                    total:  7.7 Mi (481.5 KiB/s)

From the command output, the pull fetched a total of 1 index, 1 config, 1 manifest, and 6 layers.
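
All of them land in containerd's content store as blobs addressed by digest, which can also be listed through ctr:

# lists every blob in the content store with its digest, size, age, and labels
ctr content ls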

io.containerd.metadata.v1.bolt/meta.db is a boltdb file that stores persistent references to images and bundles. boltdb is an embedded key/value database; the comment at the top of the containerd source file https://github.com/containerd/containerd/blob/master/metadata/buckets.go describes the db schema:

// keys.
//  ├──version : <varint>                        - Latest version, see migrations
//  └──v1                                        - Schema version bucket
//     ╘══*namespace*
//        ├──labels
//        │  ╘══*key* : <string>                 - Label value
//        ├──image
//        │  ╘══*image name*
//        │     ├──createdat : <binary time>     - Created at
//        │     ├──updatedat : <binary time>     - Updated at
//        │     ├──target
//        │     │  ├──digest : <digest>          - Descriptor digest
//        │     │  ├──mediatype : <string>       - Descriptor media type
//        │     │  └──size : <varint>            - Descriptor size
//        │     └──labels
//        │        ╘══*key* : <string>           - Label value
//        ├──containers
//        │  ╘══*container id*
//        │     ├──createdat : <binary time>     - Created at
//        │     ├──updatedat : <binary time>     - Updated at
//        │     ├──spec : <binary>               - Proto marshaled spec
//        │     ├──image : <string>              - Image name
//        │     ├──snapshotter : <string>        - Snapshotter name
//        │     ├──snapshotKey : <string>        - Snapshot key
//        │     ├──runtime
//        │     │  ├──name : <string>            - Runtime name
//        │     │  ├──extensions
//        │     │  │  ╘══*name* : <binary>       - Proto marshaled extension
//        │     │  └──options : <binary>         - Proto marshaled options
//        │     └──labels
//        │        ╘══*key* : <string>           - Label value
//        ├──snapshots
//        │  ╘══*snapshotter*
//        │     ╘══*snapshot key*
//        │        ├──name : <string>            - Snapshot name in backend
//        │        ├──createdat : <binary time>  - Created at
//        │        ├──updatedat : <binary time>  - Updated at
//        │        ├──parent : <string>          - Parent snapshot name
//        │        ├──children
//        │        │  ╘══*snapshot key* : <nil>  - Child snapshot reference
//        │        └──labels
//        │           ╘══*key* : <string>        - Label value
//        ├──content
//        │  ├──blob
//        │  │  ╘══*blob digest*
//        │  │     ├──createdat : <binary time>  - Created at
//        │  │     ├──updatedat : <binary time>  - Updated at
//        │  │     ├──size : <varint>            - Blob size
//        │  │     └──labels
//        │  │        ╘══*key* : <string>        - Label value
//        │  └──ingests
//        │     ╘══*ingest reference*
//        │        ├──ref : <string>             - Ingest reference in backend
//        │        ├──expireat : <binary time>   - Time to expire ingest
//        │        └──expected : <digest>        - Expected commit digest
//        └──leases
//           ╘══*lease id*
//              ├──createdat : <binary time>     - Created at
//              ├──labels
//              │  ╘══*key* : <string>           - Label value
//              ├──snapshots
//              │  ╘══*snapshotter*
//              │     ╘══*snapshot key* : <nil>  - Snapshot reference
//              ├──content
//              │  ╘══*blob digest* : <nil>      - Content blob reference
//              └──ingests
//                 ╘══*ingest reference* : <nil> - Content ingest reference

As you can see, it mainly records metadata about images, content, snapshots, and containers. Here is a simple Go program that reads and prints the contents of the current boltdb; note that the values marked as binary types in the schema above will print as garbage:

package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("/var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db", 0666, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Walk the "default" namespace under the "v1" schema bucket.
	db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("v1")).Bucket([]byte("default"))
		travelBucket(b, "")
		return nil
	})
}

// travelBucket recursively prints all keys and values; in boltdb a nil
// value means the key refers to a nested bucket.
func travelBucket(b *bolt.Bucket, space string) {
	space = space + "\t"
	b.ForEach(func(k, v []byte) error {
		if v == nil {
			fmt.Printf("%sbucket=%s:\n", space, k)
			travelBucket(b.Bucket(k), space)
		} else {
			fmt.Printf("%skey=%s, value=%s\n", space, k, v)
		}
		return nil
	})
}
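
boltdb takes an exclusive lock on the database file, so bolt.Open will block while containerd has meta.db open. Stop the daemon first (or point the program at a copy of the file):

systemctl stop containerd
go run main.go
systemctl start containerd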

For now, the key takeaway is that meta.db holds only metadata about the various stores. So where does the pulled image content actually live? It is stored in io.containerd.content.v1.content/blobs/sha256:

ll io.containerd.content.v1.content/blobs/sha256
total 10668
1690b63e207f6651429bebd716ace700be29d0110a0cfefff5038bb2a7fb6fc7
29712d301e8c43bcd4a36da8a8297d5ff7f68c3d4c3f7113244ff03675fa5e9c
540db60ca9383eac9e418f78490994d0af424aab7bf6d0e47ac8ed4e2e9bcbba
6ab1d05b49730290d3c287ccd34640610423d198e84552a4c2a4e98a46680cfd
8173c12df40f1578a7b2dfbbc0034a4fbc8ec7c870fd32b9236c2e5e1936616a
8cc52074f78e0a2fd174bdd470029cf287b7366bf1b8d3c1f92e2aa8789b92ae
aa7854465cce07929842cb49fc92f659de8a559cf521fc7ea8e1b781606b85cd
b7cb70118c9729f8dc019187a4411980418a87e6a837f4846e87130df379e2c8
eaaa58f8757d6f04b2e34ace57a71d79f8468053c198f5758fd2068ac235f303

The 9 files above correspond exactly to the 1 index, 1 config, 1 manifest, and 6 layer files. The index, manifest, and config are JSON and can be viewed directly with cat, while the layer files are compressed tarballs that can be unpacked with tar. In other words, the content store holds exactly the configs, manifests, and layer tars defined by the OCI image spec.
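
A couple of quick checks against the blobs above (assuming jq is installed; the digests are the ones printed during the pull):

cd /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256
# the index is plain JSON listing one manifest per platform
jq '.manifests[].platform' eaaa58f8757d6f04b2e34ace57a71d79f8468053c198f5758fd2068ac235f303
# a layer blob is a gzipped tarball; list its contents without unpacking it
tar -tzf 6ab1d05b49730290d3c287ccd34640610423d198e84552a4c2a4e98a46680cfd | head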

And indeed, containerd unpacks these layer tars from the content store into snapshots.

Look at the /var/lib/containerd directory again:

tree /var/lib/containerd/ -L 3
/var/lib/containerd/
├── io.containerd.content.v1.content
│   ├── blobs
│   │   └── sha256
│   └── ingest
├── io.containerd.metadata.v1.bolt
│   └── meta.db
├── io.containerd.runtime.v1.linux
├── io.containerd.runtime.v2.task
├── io.containerd.snapshotter.v1.btrfs
├── io.containerd.snapshotter.v1.native
│   └── snapshots
├── io.containerd.snapshotter.v1.overlayfs
│   ├── metadata.db
│   └── snapshots
│       ├── 1
│       ├── 2
│       ├── 3
│       ├── 4
│       ├── 5
│       └── 6
└── tmpmounts

Six new subdirectories named 1 through 6 have appeared under io.containerd.snapshotter.v1.overlayfs/snapshots. Looking into them:

tree /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/ -L 3
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/
├── 1
│   ├── fs
│   │   ├── bin
│   │   ├── dev
│   │   ├── etc
│   │   ├── home
│   │   ├── lib
│   │   ├── media
│   │   ├── mnt
│   │   ├── opt
│   │   ├── proc
│   │   ├── root
│   │   ├── run
│   │   ├── sbin
│   │   ├── srv
│   │   ├── sys
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
├── 2
│   ├── fs
│   │   ├── etc
│   │   └── home
│   └── work
├── 3
│   ├── fs
│   │   ├── etc
│   │   ├── lib
│   │   ├── sbin
│   │   ├── usr
│   │   └── var
│   └── work
├── 4
│   ├── fs
│   │   ├── bin
│   │   ├── etc
│   │   ├── lib
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
├── 5
│   ├── fs
│   │   └── data
│   └── work
└── 6
    ├── fs
    │   └── usr
    └── work
The main job of containerd's snapshotter is to prepare a container's rootfs by mounting the individual layers. The default snapshotter is overlayfs, an implementation of a union filesystem: overlayfs treats the read-only image layers as the lowerdir and the writable container layer as the upperdir, and presents their union as a single merged mount.
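
The mechanics are easy to reproduce by hand with a throwaway overlay mount; a minimal sketch using made-up paths under /tmp (run as root):

mkdir -p /tmp/ov/{lower,upper,work,merged}
echo from-lower > /tmp/ov/lower/a.txt
mount -t overlay overlay -o lowerdir=/tmp/ov/lower,upperdir=/tmp/ov/upper,workdir=/tmp/ov/work /tmp/ov/merged
cat /tmp/ov/merged/a.txt                # the read-only lower layer shows through
echo from-upper > /tmp/ov/merged/b.txt
ls /tmp/ov/upper                        # b.txt -- all writes land in the upperdir
umount /tmp/ov/merged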

Now start a redis container:

nerdctl run -d --name redis redis:alpine3.13

A new directory named 7 now appears under io.containerd.snapshotter.v1.overlayfs/snapshots:

tree /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/ -L 2
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/
├── 1
│   ├── fs
│   └── work
├── 2
│   ├── fs
│   └── work
├── 3
│   ├── fs
│   └── work
├── 4
│   ├── fs
│   └── work
├── 5
│   ├── fs
│   └── work
├── 6
│   ├── fs
│   └── work
└── 7
    ├── fs
    └── work

The mount command shows the overlayfs RootFS mounted for the container:

mount | grep /var/lib/containerd
overlay on /run/containerd/io.containerd.runtime.v2.task/default/8102f7fbee26792830e54e80b3488714ac559e092c59beb2e311cf8e88f475d6/rootfs type overlay (rw,relatime,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/6/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/5/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/4/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/2/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/7/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/7/work)

As you can see, snapshots/6/fs, snapshots/5/fs, ..., snapshots/1/fs serve as the lowerdir and snapshots/7/fs as the upperdir. The merged union mount, /run/containerd/io.containerd.runtime.v2.task/default/8102f7fbee26792830e54e80b3488714ac559e092c59beb2e311cf8e88f475d6/rootfs, is the container's rootfs; listing it shows a typical Linux directory layout:

ls /run/containerd/io.containerd.runtime.v2.task/default/8102f7fbee26792830e54e80b3488714ac559e092c59beb2e311cf8e88f475d6/rootfs
bin  data  dev  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
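
The same layering can be seen through ctr, which shares the default namespace with nerdctl here: each image layer is a Committed snapshot keyed by its chain ID, while the container's writable layer is an Active snapshot keyed by the container ID, with the topmost image layer as its parent:

# columns: KEY, PARENT, KIND (Committed for image layers, Active for containers)
ctr snapshot ls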

Let's exec into the container and create a hello file in /root (the echo runs inside the container's shell):

nerdctl exec -it redis sh
echo hello > /root/hello

On the host, this file can be found in the upperdir; any change a container makes to its filesystem shows up in the upperdir:

ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/7/fs/root/
hello
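
The upperdir lives and dies with the container: removing the container also removes its writable snapshot (a quick check; note this deletes the running redis container):

nerdctl rm -f redis
ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/
# 1  2  3  4  5  6    <- snapshot 7, the upperdir, is gone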

containerd's Snapshotter #

Finally, a look at containerd's snapshot component. It implements the Snapshotter, which manages filesystem snapshots of container images and operations such as mounting and unmounting a container's rootfs. The snapshotter plays the role that graphdriver storage drivers play in Docker: containerd's design replaces the graphdriver model with the newer snapshotter model, and its core developers explain why in the blog post Where are containerd's graph drivers?.
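
Snapshotters are also pluggable. As a final sketch, you can unpack an image with the native snapshotter, which stores each snapshot as a plain directory copy instead of overlay layers, and watch its data directory fill up:

# pull and unpack using the native snapshotter instead of the default overlayfs
ctr images pull --snapshotter native docker.io/library/redis:alpine3.13
ls /var/lib/containerd/io.containerd.snapshotter.v1.native/snapshots/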
