
set MALLOC_ARENA_MAX to a lower dynamic value to prevent OOM #143

Merged
merged 2 commits into master from malloc on Feb 22, 2018

Conversation

jtwaleson
Contributor

@jtwaleson jtwaleson commented Feb 21, 2018

The default behavior is 8x the number of detected CPUs. As Cloud Foundry typically uses large host machines with smaller containers, and the Java process is unaware of the difference in allocated CPUs, the numbers are way off. This often leads to high native memory usage, followed by a cgroup OOM killer event.

We go with Heroku's recommendation of lowering the setting to 2 for small instances. We also grow the setting linearly with memory, to be more in line with the default setting in Mendix Cloud v3.

References:

- cloudfoundry/java-buildpack#163
- https://devcenter.heroku.com/articles/testing-cedar-14-memory-use
- cloudfoundry/java-buildpack#320
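
For illustration, here is a minimal sketch (in Python, assuming the buildpack is Python-based) of the kind of dynamic calculation described above: a floor of 2 arenas for small instances, growing linearly with the container's memory limit. The function name and the growth rate of one arena per GiB are assumptions for illustration, not necessarily the exact values merged.

```python
# Illustrative only: derive MALLOC_ARENA_MAX from the container memory limit,
# with a floor of 2 (Heroku's recommendation for small instances) and linear
# growth with memory. The exact growth rate in the merged change may differ.

def malloc_arena_max(memory_limit_mib: int) -> int:
    # Assumed rate: one additional arena per GiB of memory, never below 2.
    return max(2, memory_limit_mib // 1024 + 1)

for mib in (512, 1024, 2048, 4096, 8192):
    print(f"{mib} MiB -> MALLOC_ARENA_MAX={malloc_arena_max(mib)}")
```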
Contributor

@hansthen hansthen left a comment


It seems wrong to make this value memory dependent. It should depend on the number of cores available to your process (give each core's threads their own set of memory pools).

I cannot see why this should be memory dependent ("this is how we do it in v3" does not count). By the way, I could live with a static value of 2 or 4, especially since we can tweak it at runtime.
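For context, a small sketch of the contrast being discussed: glibc derives its default arena cap from the CPUs it detects (8 * cores on 64-bit systems), which in a Cloud Foundry container reflects the host rather than the container's CPU share. The values printed are illustrative, not a measurement from any specific deployment.

```python
# Illustrative: glibc's default arena cap is based on the CPUs the process
# detects (8 * cores on 64-bit), not on the container's CPU share.
import os

host_cpus = os.cpu_count() or 1
glibc_default = 8 * host_cpus   # what a process inside the container gets
static_floor = 2                # the static value discussed in this thread

print(f"CPUs detected: {host_cpus}")
print(f"glibc default MALLOC_ARENA_MAX: {glibc_default}")
print(f"static floor discussed here: {static_floor}")
```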

@jtwaleson
Contributor Author

@hansthen in CF, the amount of CPU shares (not cores) scales linearly with the amount of memory. See also CLD-2455. I would also be fine with hard-coding to 2 for now.

@knorrie
Member

knorrie commented Feb 21, 2018

Since we don't have any real-life measurements of actual usage of these memory areas (as opposed to allocation of them), or of malloc/free and lock contention rates in hosted applications, it makes sense to at least take the first step and lower the limits to sensible levels that correspond to the CPU/memory sizing of the container, compared with running the same workload on a normal OS.

@djvdorp
Contributor

djvdorp commented Feb 22, 2018

What @knorrie said, +1 that.

@jtwaleson jtwaleson dismissed hansthen’s stale review February 22, 2018 17:17

Discussed offline: we're good with the current proposal, but as a static value of 2 might be too limited for some apps, we'll use the dynamic calculation.

@jtwaleson
Contributor Author

Discussed offline; we agreed on the current calculation.

@jtwaleson jtwaleson merged commit 6deb1e8 into master Feb 22, 2018
@jtwaleson jtwaleson deleted the malloc branch February 22, 2018 20:08