
Does node's max_old_space_size affect the child process's memory limits?

I'm using node to manage the communication layer between services and a long-running java process. The java process is a jar launched with child_process.spawn().

I'm setting up listeners on stdout, stderr, and the close event to monitor the child process's progress and save its output as it runs. I expected the child process to run in its own memory space, with its own memory limits, as it is a standalone process.

However, testing shows that the process runs significantly longer before hitting memory issues when I increase the node process's max_old_space_size. It seems the memory allocated by the java process is counted against the parent process's maximum allocation. Is this the case?

The answer is no. Those memory limits are only passed on if you spawn a node process via fork(), whose execArgv option defaults to process.execArgv (which includes the current process's V8 flags), or if you spawn the child in a way that explicitly limits its memory (e.g. Java's own resource-limiting flags such as -Xmx, or a wrapper command that starts the actual child process with restricted resources). Node will not implicitly run ulimit or any similar command on your behalf.
