Use systemd cgroup manager when kubelet/containerd are managed by systemd unit #4099
Labels: area/control-plane, area/upgrades, kind/feature, lifecycle/active, priority/important-soon
User Story
As a user, I would like kubelet and containerd to use the systemd cgroup driver when they are managed by systemd units.
Detailed Description
Image builder uses systemd units to run both kubelet and containerd; systemd allocates a cgroup per unit, while both kubelet and containerd default to the cgroupfs cgroup driver.
As a result, there are two different cgroup managers on each machine, which gives you two conflicting views of the machine's resources; in the field, people have reported cases where such systems become unstable under resource pressure (see here for more context).
This is not ideal; instead, we should make sure everything is configured to use the systemd cgroup driver only.
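For context, the kubelet side of the switch is a single field in its configuration file; a minimal sketch (all other KubeletConfiguration fields are omitted):

```yaml
# KubeletConfiguration fragment: move the kubelet off the cgroupfs
# default so it matches the systemd-managed cgroup hierarchy.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

On the containerd side, the equivalent switch is `SystemdCgroup = true` under `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]` in /etc/containerd/config.toml.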
In order to make this happen, a coordinated, multi-project effort is required.
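As one illustration of the Cluster API piece of that effort, a bootstrap configuration could write the containerd setting onto machines at provisioning time; a minimal sketch, assuming a KubeadmConfigSpec is in use (shipping it via `files` is illustrative here, not a settled design):

```yaml
# Hypothetical KubeadmConfigSpec fragment: write a containerd config
# that delegates cgroup management for the runc runtime to systemd.
files:
- path: /etc/containerd/config.toml
  owner: root:root
  permissions: "0644"
  content: |
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
```

In practice image builder bakes the full containerd config into the image, so this would more likely be handled there; the fragment above only shows the relevant key, not a complete containerd configuration.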
Anything else you would like to add:
We should account for configurations not using containerd, not using systemd, and for configurations not using kubeadm as a bootstrap/control-plane provider.
Given that both image builder and kubeadm allow overriding the default, and that the KubeadmControlPlane is optional, I don't see blockers for these scenarios, but if anyone has more context on these use cases, please comment.
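To make the "overriding the default" point concrete, this is roughly what a user-level override looks like today through kubeadm's node registration options; a minimal sketch against a KubeadmControlPlane (the surrounding spec fields are omitted):

```yaml
# Hypothetical KubeadmControlPlane fragment: pin the kubelet to a
# specific cgroup driver regardless of the image/kubeadm defaults.
spec:
  kubeadmConfigSpec:
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cgroup-driver: systemd
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cgroup-driver: systemd
```

The same `kubeletExtraArgs` override works on a KubeadmConfigTemplate for worker machines, which is why the non-KubeadmControlPlane scenarios above don't look like blockers.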
This was discussed during CAPI office hours on the 20th of January.
/kind feature