
Reduce memory footprint by removing unnecessary cloud provider plugins at runtime #78271



Closed
frolickingferret445 opened this issue May 24, 2019 · 15 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one.

Comments

@frolickingferret445

frolickingferret445 commented May 24, 2019

I am interested in reducing the memory footprint of Kubernetes if possible. From what I gather, by default, Kubernetes runs with plugins for all cloud providers compiled in. I’ve looked into https://k3s.io/ and it mentions that it removes unnecessary in-tree cloud providers and storage drivers. However, it doesn’t offer other important features, so I’d prefer to stick to my existing installation process (with perhaps a couple of additional steps).

Is there a way to do this? (preferably with kubeadm)

I discovered a component called cloud-controller-manager (a beta feature, run as a DaemonSet) which pulls some of the cloud-specific tasks out of kube-controller-manager.

https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/

However, it still has a --cloud-provider command-line flag. Does cloud-controller-manager reduce the memory footprint by only loading the plugin for the necessary cloud provider?

I looked into using kubeadm to install cloud-controller-manager, but it seems that it isn’t yet supported, probably because it’s a beta feature. All of the controller-manager flags are for kube-controller-manager:

https://kubernetes.io/docs/setup/independent/control-plane-flags/

It sounds like I could get this working with a few extra steps similar to this:

https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md

Right?

Any help would be appreciated, thanks.

@kubernetes/sig-cloud-provider-maintainers

@frolickingferret445 frolickingferret445 added the kind/support Categorizes issue or PR as a support question. label May 24, 2019
@k8s-ci-robot
Contributor

@frolickingferret445: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 24, 2019
@frolickingferret445
Author

@sig-cloud-provider

@frolickingferret445
Author

@kubernetes/sig-cloud-provider

@frolickingferret445
Author

@kubernetes/sig-cloud-provider-maintainers

@frolickingferret445
Author

/sig cloud-provider-maintainers

@vllry
Contributor

vllry commented May 24, 2019

/assign @andrewsykim

@BenTheElder
Member

also interested in small memory footprints 🙃
cc @cheftako re: cloud-controller-manager
cc @neolit123 re: kubeadm

@neolit123
Member

I looked into using kubeadm to install cloud-controller-manager, but it seems that it isn’t yet supported, probably because it’s a beta feature. All of the controller-manager flags are for kube-controller-manager:

kubeadm is a node bootstrapper. if kubernetes supports cloud-controller-manager as a DaemonSet then kubeadm supports it too.

except that we don't have a guide for that.

All of the controller-manager flags are for kube-controller-manager:
https://kubernetes.io/docs/setup/independent/control-plane-flags/

you still have to use the kubeadm config to pass extraArgs to the control-plane components.
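For example, a minimal kubeadm config that tells kube-controller-manager to delegate cloud-specific work to an external cloud-controller-manager might look like this (a sketch; verify the apiVersion against `kubeadm config print init-defaults` for your release):

```shell
# Sketch: pass cloud-provider=external to kube-controller-manager via
# the kubeadm ClusterConfiguration extraArgs mechanism. The apiVersion
# shown here is an assumption and varies by kubeadm release.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: external
EOF
kubeadm init --config kubeadm-config.yaml
```

With cloud-provider set to external, kube-controller-manager skips its cloud-dependent control loops and leaves them to whatever cloud-controller-manager you deploy separately.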

It sounds like I could get this working with a few extra steps similar to this :
https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md

this guide runs the cloud-controller-manager as a static pod.
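For context, a static pod is just a manifest dropped into the kubelet's manifest directory, which on a default kubeadm control-plane node is /etc/kubernetes/manifests (assuming the default --pod-manifest-path):

```shell
# Assumption: default kubeadm layout. The kubelet watches this directory
# and runs any pod manifest placed in it as a static pod, without the
# API server scheduling it.
sudo cp cloud-controller-manager.yaml /etc/kubernetes/manifests/
```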

Does cloud-controller-manager reduce the memory footprint by only loading plugins for the necessary cloud provider?

will leave the memory footprint questions to sig-cloud-provider. :)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 23, 2019
@BenTheElder
Member

you can exclude all of them at build time now. the CCM is the way forward though, as all cloud providers are going out of tree
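If I recall correctly, the build-time exclusion is done with the `providerless` Go build tag; a sketch of building a provider-free kube-controller-manager from a kubernetes/kubernetes checkout (the tag name is an assumption here, so verify it against your source tree):

```shell
# Sketch: compile kube-controller-manager without the in-tree cloud
# provider code. Run from the root of a kubernetes/kubernetes checkout;
# availability of the `providerless` tag depends on the release.
make WHAT=cmd/kube-controller-manager GOFLAGS=-tags=providerless
```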

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 23, 2019
@cheftako
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 23, 2019
@cheftako
Member

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Oct 23, 2019
@dims
Member

dims commented Apr 12, 2020

tracked already kubernetes/enhancements#88 (comment)

/close

@k8s-ci-robot
Contributor

@dims: Closing this issue.

In response to this:

tracked already kubernetes/enhancements#88 (comment)

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
