The RBD scheme is similar in structure to Scheme: iSCSI, as both are network block device attachment schemes. In our testing, RBD + SquashFS + Overlay provided the most scalable results. Moreover, the added features of Ceph/RBD, such as replication and failover, make this a promising solution for large-scale, demanding clusters.
Note: in this example we use the admin account to attach the RBD for simplicity. In practice you almost certainly want a more restricted account for this purpose.
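A more restricted client can be created with Ceph's RBD cap profiles. The following is a minimal sketch, not taken from the original setup: the client name `imageapi`, the keyring path, and the assumption that images live in the `rbd` pool are all illustrative.

```sh
# Create a Ceph client restricted to RBD operations on the "rbd" pool
# (client name and keyring path are illustrative choices).
ceph auth get-or-create client.imageapi \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd' \
    -o /etc/ceph/ceph.client.imageapi.keyring
```

The resulting id and key would then take the place of `admin` and the admin secret in the commands and container definition below.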
We will demonstrate using SquashFS here. Other filesystem types follow the same pattern: run the appropriate mkfs.* against the RBD device, then copy files over (or write them in place).
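As an illustration of that alternative path, an ext4 image could be produced roughly as follows. This is only a sketch: it assumes the RBD has already been created and mapped at /dev/rbd0 (as in the steps below), and the chroot and mount point paths are placeholders.

```sh
# Format the mapped RBD device with ext4 and copy the image tree into it
# (device, chroot path, and mount point are placeholders).
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/image1
cp -a /path/to/chroot/. /mnt/image1/
umount /mnt/image1
```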
- Create the image chroot

  Follow some procedure that provides a subdirectory containing the image files, e.g. a `dnf --installroot=...` invocation (see the sketch after this list), or perhaps a container build and mount.

- Create the RBD object

      rbd create --image-feature layering --image-shared -s 100G image1

- Map the RBD object on the image creation machine

      # rbd map --monitor 192.168.3.253 --pool rbd --image image1 --id admin --secret XXXXXXXXXXXX /dev/rbd0

- Write the image

  From within the chroot directory, build the SquashFS directly onto the RBD device:

      mksquashfs . /dev/rbd0 -noI -noX -noappend

- Clean up the attachment

      rbd unmap /dev/rbd0
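As a concrete example of the chroot-creation step referenced above, the image tree can be populated with dnf. This is a minimal sketch in which the install root, release version, and package set are purely illustrative:

```sh
# Populate a scratch directory with a minimal OS tree
# (install root, releasever, and package list are illustrative).
dnf -y --installroot=/tmp/image1 --releasever=9 \
    install kernel systemd dnf passwd
```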
The image is now ready to use. For example, the container definition below mounts it as the lower layer of an overlay, attaching it via RBD as a read-only SquashFS:
{
"name": "test",
"command": "/usr/lib/systemd/systemd",
"mount": {
"kind": "overlay",
"overlay": {
"lower": [
{
"kind": "attach",
"attach": {
"kind": "rbd",
"fs_type": "squashfs",
"mount_options": [
"ro"
],
"attach": {
"kind": "rbd",
"rbd": {
"image": "image1",
"monitors": [
"192.168.3.253"
],
"options": {
"name": "admin",
"ro": true,
"secret": "XXXXXXXXXXXX"
},
"pool": "rbd"
}
}
}
}
]
}
},
"state": "running",
"systemd": true
}
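Before handing a definition like this to the service, the finished image can also be sanity-checked by hand. A rough sketch, assuming the node has a Ceph configuration and keyring for the chosen user, that the kernel assigns /dev/rbd0, and that the mount point is illustrative:

```sh
# Manually attach and inspect the SquashFS image, then detach
# (assumes ceph.conf/keyring are in place; /dev/rbd0 and /mnt/check are assumptions).
rbd map --pool rbd --image image1 --id admin
mount -t squashfs -o ro /dev/rbd0 /mnt/check
ls /mnt/check
umount /mnt/check
rbd unmap /dev/rbd0
```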