doc: Worker and heap usage #30173

Closed

ronag opened this issue Oct 30, 2019 · 5 comments

@ronag
Member

ronag commented Oct 30, 2019

I would like to request some clarification in the docs regarding workers and heap usage, in particular when setting memory limits.

For example, we have a Node service running in a Docker container with a memory limit of 1 GB, and we set --max-old-space-size=768. However, the service spawns 16 worker threads. Will these threads share old space with the main process, or does each of them get its own heap of up to 768 MB, giving a total potential heap size of 17 * 768 = 13,056 MB and getting the process killed by the OOM killer?

Also, how do these rules apply to child processes and the inheritance of old-space-size settings?
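
(For illustration, a minimal sketch that was not part of the original report: run it with `node --max-old-space-size=768 check-heap.js` to compare the heap limit reported by the main thread with the limit reported inside a worker thread; the filename is a placeholder.)

```js
'use strict';
const { Worker, isMainThread } = require('worker_threads');
const v8 = require('v8');

// Heap limit visible to the current thread, in MB.
const limitMB = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;

if (isMainThread) {
  console.log(`main thread heap limit: ${limitMB.toFixed(0)} MB`);
  // Spawn a worker from this same file: does it get its own 768 MB heap,
  // or does it share old space with the main thread?
  new Worker(__filename);
} else {
  console.log(`worker thread heap limit: ${limitMB.toFixed(0)} MB`);
}
```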

@sam-github
Contributor

sam-github commented Oct 30, 2019

  • node workers: no idea
  • cluster workers: the arg is passed to the child processes, so all of the processes may use the full max space simultaneously
  • child_process children: the arg will not be passed unless the code spawning the child does so explicitly, so each process may use whatever max space it received (or did not receive) on its command line, simultaneously

EDIT: this may be a good reason to use NODE_OPTIONS to set the max space: it is inherited by all child processes automatically (unless they modify the env), but only Node.js children will actually pay attention to it. See the sketch below.
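
(A minimal sketch, not from the original comment, of the two ways to get the flag into a child process; `./worker-task.js` is a placeholder.)

```js
'use strict';
const { fork } = require('child_process');

// Option 1: pass the flag explicitly when spawning the child.
// child_process does not forward the parent's --max-old-space-size on its own.
fork('./worker-task.js', [], {
  execArgv: ['--max-old-space-size=256'],
});

// Option 2: set NODE_OPTIONS in the environment; every Node.js child that
// inherits this env (and does not overwrite it) picks up the flag, while
// non-Node children simply ignore it.
fork('./worker-task.js', [], {
  env: { ...process.env, NODE_OPTIONS: '--max-old-space-size=256' },
});
```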

@addaleax
Member

@ronag worker_threads workers will have their own independent heaps. I’ll try to pick up work on #26628 again this week, fwiw.
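
(For reference, #26628 added per-worker resourceLimits to worker_threads. A minimal sketch of how that looks once the feature has landed; the worker filename is a placeholder.)

```js
'use strict';
const { Worker } = require('worker_threads');

// A worker with its own, smaller heap limit, independent of the main
// thread's --max-old-space-size.
const worker = new Worker('./worker-task.js', {
  resourceLimits: {
    maxOldGenerationSizeMb: 128,   // old-space limit for this worker
    maxYoungGenerationSizeMb: 16,  // young-generation limit for this worker
  },
});

// If the worker exceeds its limits it is terminated and an
// ERR_WORKER_OUT_OF_MEMORY error is emitted.
worker.on('error', (err) => console.error(err.code, err.message));
```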

@ronag
Member Author

ronag commented Oct 30, 2019

@addaleax: Cool, we'll wait for that before switching to threads from child processes.

@gireeshpunathil
Member

17 * 768 = 13,056 MB and getting the process killed by the OOM killer.

@ronag - My colleague and I have experimented a bit on this topic and have some findings captured here - not necessarily on the choice between threads and processes, but on the perceived and actual relationship between heap size and the OOM killer. Hope this helps you in some way:

https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8

@gireeshpunathil
Member

Closing this, as #26628 has landed and with the clarifications above. Please re-open if there is anything outstanding on this.
