I've noticed that memory usage increases steadily over time until our containers eventually OOM and restart.
We are not generating more metrics over time, as shown, so it seems strange that memory would keep growing so consistently.
I don't have many details, but my org did observe that 4 different pods started at the same time and crashed at the same time, over a week later. The pods are in different Kubernetes clusters, reading from different Kafka clusters with drastically different throughput. So I don't think this has anything to do with throughput on the __consumer_offsets topic.
This turned out to be a bug in the Prometheus Python client when running under Python 3.7. The client has since been patched, so I'll upgrade to the new release, which should fix the issue.
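For anyone hitting the same leak, a minimal sketch of a startup check that warns if the installed `prometheus_client` is older than the patched release. The exact patched version number is not stated in this thread, so `MIN_PATCHED_VERSION` below is a placeholder assumption to be replaced after checking the client's changelog:

```python
import sys
import pkg_resources  # ships with setuptools, works on Python 3.7

# Placeholder assumption: substitute the real release that contains the fix.
MIN_PATCHED_VERSION = "0.7.0"

installed = pkg_resources.get_distribution("prometheus_client").version
if pkg_resources.parse_version(installed) < pkg_resources.parse_version(MIN_PATCHED_VERSION):
    # Warn at startup so the stale dependency is visible in container logs
    # before memory growth becomes an OOM.
    print(
        f"prometheus_client {installed} is older than {MIN_PATCHED_VERSION}; "
        "the memory-leak fix may be missing.",
        file=sys.stderr,
    )
```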