Hello,

While hunting down what was causing heavy file system usage on my clusters, I noticed something strange.
I've checked this on multiple OKD clusters, busy ones and completely idle ones.
On all of them, the two 'downloads' pods in the openshift-console namespace constantly show a filesystem usage of 2.13G.

Check the metrics on your system:
https://<your.cluster.com>/monitoring/query-browser?query0=topk%2825%2C+sort_desc%28sum%28pod%3Acontainer_fs_usage_bytes%3Asum%7Bcontainer%3D%22%22%2Cpod%21%3D%22%22%7D%29+BY+%28pod%2C+namespace%29%29%29
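
For reference, that URL encodes the following PromQL query:

```promql
# Top 25 pods by filesystem usage, summed per pod and namespace
topk(25, sort_desc(sum(pod:container_fs_usage_bytes:sum{container="",pod!=""}) BY (pod, namespace)))
```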
These two pods show a really steady filesystem usage of 2.13G, and they are only running a simple Python web server that serves the oc command for several architectures.
Why does this generate such a heavy load for such a simple task, even when the cluster is doing nothing (like I said, I checked on all kinds of clusters)?
The logs of those pods only show GET statements and not much more than that.
Can't this be improved so it doesn't constantly hit the filesystem with a 2.13G load?

Client Version: 4.17.0-okd-scos.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.18.0-okd-scos.1
Kubernetes Version: v1.31.5-dirty
They load and store the oc binaries, which, multiplied by a couple of architectures, likely add up to the ~2G of filesystem utilization you're seeing.
You can review the source code if you'd like; I've glanced over it and it seems pretty sane to me.
It really is just temp space, and it will likely be (as you've observed) the same regardless of cluster size or activity, simply because that's how much space the oc binaries take up.
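
A rough way to see this for yourself, as a sketch only: this assumes the deployment is named downloads and that its image ships a shell and du, and names/paths may differ on your cluster.

```sh
# Sum up per-directory disk usage inside one of the downloads pods; the oc
# binaries stored for each architecture should account for most of the ~2G.
oc -n openshift-console exec deploy/downloads -- sh -c 'du -sh /* 2>/dev/null | sort -h'
```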