
Memory Leak #18

Closed
jutley opened this issue Sep 19, 2018 · 2 comments

@jutley (Contributor) commented Sep 19, 2018

I've noticed that memory usage consistently increases over time, until our containers eventually OOM and restart.

We are not generating more metrics over time, as shown in the graph below, so it seems strange that memory would grow so consistently.

I don't have many details, but my org did observe that four different pods started at the same time and crashed at the same time, over a week later. The pods are in different Kubernetes clusters, reading from different Kafka clusters with drastically different throughput, so I don't think this has anything to do with throughput on the __consumer_offsets topic.

[image: memory usage graph showing steady growth over time]
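
For anyone debugging similar growth, a minimal, generic sketch of confirming in-process allocation growth with the standard library's tracemalloc (this is not taken from the exporter's code; the snapshot interval is arbitrary):

```python
# Hedged diagnostic sketch: periodically snapshot allocations with
# tracemalloc and rank the call sites whose allocated size keeps growing.
import time
import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames per allocation traceback

baseline = tracemalloc.take_snapshot()

while True:
    time.sleep(300)  # compare against the baseline every 5 minutes
    snapshot = tracemalloc.take_snapshot()
    # The top entries in a steadily leaking process will point at the
    # allocation sites responsible for the growth.
    for stat in snapshot.compare_to(baseline, 'lineno')[:10]:
        print(stat)
```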

@braedon (Owner) commented Mar 30, 2019

This was due to a bug in the Prometheus Python client under Python 3.7. The client has since been patched, so I'll upgrade the exporter to the new version, which should fix the issue.
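
For context, a minimal sketch of how such a dependency bump might look in a Python project's setup.py. The package metadata and the prometheus_client version floor below are assumptions for illustration, not the exporter's actual configuration:

```python
# Hedged sketch of pinning the patched client in setup.py.
from setuptools import setup, find_packages

setup(
    name='prometheus-kafka-consumer-group-exporter',
    version='0.5.0',
    packages=find_packages(),
    install_requires=[
        # Require a prometheus_client release that includes the
        # Python 3.7 leak fix (the exact floor here is an assumption).
        'prometheus_client>=0.5.0',
    ],
)
```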

@braedon (Owner) commented Mar 31, 2019

@jutley fixed in 0.5.0
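
For users hitting this, upgrading to the patched release should resolve the leak, e.g. `pip install --upgrade prometheus-kafka-consumer-group-exporter` (assuming the exporter's PyPI package name; per the comment above, 0.5.0 and later include the fix).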
