We are trying to limit the memory consumption of MinIO pods, but of course we also do not want to see "random" OOM kills of the minio containers. Our intuitive approach was to set the usual Kubernetes resource requests and limits for memory inside the resources property of the Pool definition of a Tenant. However, once there is enough load on the cluster (and/or over time), we see the minio containers of the pods getting OOM killed.
Researching a bit, we found several issues that fiddle with GOMEMLIMIT and MINIO_MEMLIMIT (or the --memlimit parameter), none of which is (really) covered by the MinIO documentation (another issue to be addressed?).
Our last attempt was to set the two environment variables mentioned above, which made the minio container fail on startup whenever MINIO_MEMLIMIT was specified (either as a plain byte count or with an alphanumeric MiB unit). Before the container went into a restart loop, the only error logged was:
minio FATAL Unable to prepare the list of endpoints: strconv.ParseFloat: parsing "": invalid syntax
We assume that even numeric values of environment variables are always strings.
On the other hand, even if manually configuring the MINIO_MEMLIMIT variable were to solve our immediate issue, we would find it much more convenient if configuring resources.limits.memory alone were applied automatically and correctly to the minio process/runtime configuration as well.
Related Github issues we found
Expected Behavior
We are not sure how, and whether, a memory limit is actually meant to be defined for the MinIO process itself. What we do see is that setting memory limits in the resources section of a Pool definition leads to OOM kills of the pod container. We would rather see GOMEMLIMIT and MINIO_MEMLIMIT adopted automatically from the configured memory limits.
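For illustration, this is roughly the kind of wiring we have in mind, sketched with the Kubernetes Downward API. Whether the operator forwards a valueFrom env entry unchanged to the minio containers, and whether the container in the generated pod is actually named minio, are assumptions on our side that we have not verified:

spec:
  env:
    # Sketch only: derive the Go runtime limit from the container memory limit.
    # resourceFieldRef with divisor "1" resolves to the limit in plain bytes,
    # which GOMEMLIMIT accepts.
    - name: GOMEMLIMIT
      valueFrom:
        resourceFieldRef:
          containerName: minio
          resource: limits.memory
          divisor: "1"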
Current Behavior
No (documented) way to properly configure memory limits for the actual minio process.
OOM kills of the containers when the resource memory limits are reached.
Possible Solution
see: "Expected Behavior" above.
Steps to Reproduce (for bugs)
This is a minimal sample Tenant specification that does not work for us:
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: s3-test
spec:
  ### ↓ Some hackish attempts to configure the mem limits of the `minio` process via
  ###   environment variables, according to some Github issues/discussions we also tried.
  # env:
  #   - name: GOMEMLIMIT
  #     value: "1073741824" # no impact (also with "1GiB")
  #   - name: MINIO_OPTS
  #     value: "--memlimit=1073741824" # also not working
  #   - name: MINIO_MEMLIMIT
  #     value: "1073741824" # will lead to "strconv.ParseFloat" error (also "1GiB")
  configuration:
    name: s3-test-admin
  image: quay.io/minio/minio:RELEASE.2025-01-18T00-31-37Z
  pools:
    - name: test
      resources:
        requests: { memory: 128Mi }
        limits: { memory: 128Mi }
      servers: 4
      volumesPerServer: 1
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
  requestAutoCert: false
  subPath: /data
  buckets: [ name: test ]
  users: [ name: s3-test ]
With such a setup, we just needed to copy a large file (a ~1GB .iso) to the test bucket, which got the container killed after a few seconds. Without the resources.limits config we can see memory consumption climbing above the 200Mi mark, which explains the OOM kill once limits are enabled.
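For completeness, here is a variant of the sample above that should at least keep the Go runtime below the cgroup limit. This is only a sketch, based on our understanding that GOMEMLIMIT is a soft limit that has to sit noticeably below resources.limits.memory to matter (our 1 GiB value above was far above the 128Mi limit, so it could not have had any effect); the 512Mi/384MiB numbers are arbitrary examples, not recommendations:

spec:
  env:
    # GOMEMLIMIT accepts a plain byte count or KiB/MiB/GiB suffixes; it only
    # bounds Go-managed memory, so this reduces but does not rule out OOM kills.
    - name: GOMEMLIMIT
      value: "384MiB"
  pools:
    - name: test
      servers: 4
      volumesPerServer: 1
      resources:
        requests: { memory: 512Mi }
        limits: { memory: 512Mi }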
Context
no particular context
Regression
No - at least not for us.
Your Environment
minio-operator: 7.0.0
minio container image: RELEASE.2025-01-18T00-31-37Z
Operating system (uname -a): Linux 5.15.0-117-generic on Ubuntu 22.04.4 LTS