
Node.js heap out of memory

Today I ran my filesystem-indexing script to refresh the RAID file index, and after 4 hours it crashed with the following error:

[md5:]  241613/241627 97.5%  
[md5:]  241614/241627 97.5%  
[md5:]  241625/241627 98.1%
Creating missing list... (79570 files missing)
Creating new files list... (241627 new files)

<--- Last few GCs --->

11629672 ms: Mark-sweep 1174.6 (1426.5) -> 1172.4 (1418.3) MB, 659.9 / 0 ms [allocation failure] [GC in old space requested].
11630371 ms: Mark-sweep 1172.4 (1418.3) -> 1172.4 (1411.3) MB, 698.9 / 0 ms [allocation failure] [GC in old space requested].
11631105 ms: Mark-sweep 1172.4 (1411.3) -> 1172.4 (1389.3) MB, 733.5 / 0 ms [last resort gc].
11631778 ms: Mark-sweep 1172.4 (1389.3) -> 1172.4 (1368.3) MB, 673.6 / 0 ms [last resort gc].


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x3d1d329c9e59 <JS Object>
1: SparseJoinWithSeparatorJS(aka SparseJoinWithSeparatorJS) [native array.js:~84] [pc=0x3629ef689ad0] (this=0x3d1d32904189 <undefined>,w=0x2b690ce91071 <JS Array[241627]>,L=241627,M=0x3d1d329b4a11 <JS Function ConvertToString (SharedFunctionInfo 0x3d1d3294ef79)>,N=0x7c953bf4d49 <String[4]\: ,\n  >)
2: Join(aka Join) [native array.js:143] [pc=0x3629ef616696] (this=0x3d1d32904189 <undefin...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/bin/node]
 2: 0xe2c5fc [/usr/bin/node]
 3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/bin/node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
 5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/bin/node]
 6: v8::internal::Runtime_SparseJoinWithSeparator(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
 7: 0x3629ef50961b

The server is equipped with 16 GB of RAM and 24 GB of SSD swap. I highly doubt my script exceeded 36 GB of memory; at least it shouldn't have.

The script creates an index of files stored as an Array of Objects holding file metadata (modification dates, permissions, etc.; no big data).

Here's the full script code: http://pastebin.com/mjaD76c3

I've already experienced weird node issues in the past with this script, which forced me, for example, to split the index into multiple files, since node was glitching when working with such big Strings. Is there any way to improve Node.js memory management for huge datasets?
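The stack trace above dies inside a sparse-array Join over all 241627 entries, i.e. while materializing the whole index as one giant string. For comparison, here is a minimal sketch of a streaming write that never holds the full output in the heap (the sample entries and the output filename are placeholders, not taken from the original script):

const fs = require('fs');

// Placeholder data standing in for the real metadata index.
const entries = [{ path: '/tmp/a', mtime: Date.now(), mode: 0o644 }];

// Write one JSON line per file instead of joining ~241k entries into one string.
const out = fs.createWriteStream('files.index.ndjson');
for (const entry of entries) {
  out.write(JSON.stringify(entry) + '\n'); // each chunk stays small
}
out.end();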

If I remember correctly, V8 has a strict default limit on memory usage of around 1.7 GB if you do not increase it manually.

In one of our products, we used this solution in our deploy script:

 node --max-old-space-size=4096 yourFile.js

There is also a new-space flag, but as I read here: a-tour-of-v8-garbage-collection, the new space only collects newly created, short-term data, while the old space contains all long-lived, referenced data structures, which in your case should be the best option.
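For reference, the new-space knob is exposed as --max-semi-space-size (value in MB), while the old-space flag above is the one that matters for long-lived data; combining both would look like this (a sketch, with yourFile.js as before):

 node --max-semi-space-size=64 --max-old-space-size=4096 yourFile.js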

If you want to increase node's memory usage globally, not just for a single script, you can export an environment variable, like this:
export NODE_OPTIONS=--max_old_space_size=4096

Then you do not need to fiddle with files when running builds like npm run build.

Just in case anyone runs into this in an environment where they cannot set node properties directly (in my case, a build tool):

NODE_OPTIONS="--max-old-space-size=4096" node ...

You can set the node options using an environment variable if you cannot pass them on the command line.

Here are some flag values, as additional info on how to allow more memory when you start up your node server.

1 GB - 8 GB

#increase to 1gb
node --max-old-space-size=1024 index.js

#increase to 2gb
node --max-old-space-size=2048 index.js 

#increase to 3gb
node --max-old-space-size=3072 index.js

#increase to 4gb
node --max-old-space-size=4096 index.js

#increase to 5gb
node --max-old-space-size=5120 index.js

#increase to 6gb
node --max-old-space-size=6144 index.js

#increase to 7gb
node --max-old-space-size=7168 index.js

#increase to 8gb 
node --max-old-space-size=8192 index.js 

I just faced the same problem with my EC2 t2.micro instance, which has 1 GB of memory.

I resolved the problem by creating a swap file using this url and setting the following environment variable:

export NODE_OPTIONS=--max_old_space_size=4096

Finally, the problem was gone.

I hope this will be helpful in the future.
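For reference, a typical swap-file setup on Ubuntu or Amazon Linux looks like the sequence below (the 2G size is an assumption; the url mentioned in the answer has the full steps):

sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile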

I encountered this issue when trying to debug with VSCode, so I just wanted to add how you can pass the argument to your debug setup.

You can add it to the runtimeArgs property of your config in launch.json.

See the example below.

{
"version": "0.2.0",
"configurations": [{
        "type": "node",
        "request": "launch",
        "name": "Launch Program",
        "program": "${workspaceRoot}\\server.js"
    },
    {
        "type": "node",
        "request": "launch",
        "name": "Launch Training Script",
        "program": "${workspaceRoot}\\training-script.js",
        "runtimeArgs": [
            "--max-old-space-size=4096"
        ]
    }
]}

I was struggling with this even after setting --max-old-space-size.

Then I realised the --max-old-space-size option needs to go before the karma script.

It's also best to specify both syntaxes, --max-old-space-size and --max_old_space_size. My script for karma:

node --max-old-space-size=8192 --optimize-for-size --max-executable-size=8192  --max_old_space_size=8192 --optimize_for_size --max_executable_size=8192 node_modules/karma/bin/karma start --single-run --max_new_space_size=8192   --prod --aot

Reference: https://github.com/angular/angular-cli/issues/1652

I had a similar issue while doing an AOT Angular build. The following commands helped me:

npm install -g increase-memory-limit
increase-memory-limit

Source: https://geeklearning.io/angular-aot-webpack-memory-trick/

Steps to fix this issue (on Windows):

  1. Open a command prompt, type %appdata%, and press Enter
  2. Navigate to the %appdata% > npm folder
  3. Open or edit ng.cmd in your favorite editor
  4. Add --max_old_space_size=8192 to the IF and ELSE blocks

Your ng.cmd file looks like this after the change:

@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
)

I faced this same problem recently and came across this thread, but my problem was with a React app. The change below to the node start command solved my issue.

Syntax

node --max-old-space-size=<size> path-to/fileName.js

Example

node --max-old-space-size=16000 scripts/build.js

Why is the size 16000 in max-old-space-size?

Basically, it varies depending on the memory allocated to that thread and your node settings.

How do you verify and choose the right size?

This basically lives in the V8 engine. The code below helps you understand the heap size of your local node's V8 engine:

const v8 = require('v8');
const totalHeapSize = v8.getHeapStatistics().total_available_size;
const totalHeapSizeGb = (totalHeapSize / 1024 / 1024 / 1024).toFixed(2);
console.log('totalHeapSizeGb: ', totalHeapSizeGb);
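To confirm that a flag actually took effect, the heap_size_limit field from the same getHeapStatistics() call can be printed under the raised limit; it should come out close to the requested size:

node --max-old-space-size=4096 -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024 / 1024)"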

I just want to add that on some systems, even increasing the node memory limit with --max-old-space-size is not enough, and there is an OS error like this:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

In this case, it is probably because you reached the maximum number of memory maps (mmaps) per process.

You can check the max_map_count by running

sysctl vm.max_map_count

and increase it by running

sysctl -w vm.max_map_count=655300

and make it persist across reboots by adding this line

vm.max_map_count=655300

to the /etc/sysctl.conf file.

Check here for more info.

A good way to analyse the error is to run the process with strace:

strace node --max-old-space-size=128000 my_memory_consuming_process.js

If you want to change the memory limit globally for node on Windows, go to Advanced system settings -> Environment variables -> New user variable

variable name = NODE_OPTIONS
variable value = --max-old-space-size=4096

Recently, one of my projects ran into the same problem. I tried a couple of things, which anyone can try as debugging steps to identify the root cause:

  1. As everyone suggested, increase the memory limit in node by adding this command:

     { "scripts":{ "server":"node --max-old-space-size={size-value} server/index.js" } }

Here, the size-value I defined for my application was 1536 (as my Kubernetes pod had a 2 GB memory limit and a 1.5 GB request).

So always define the size-value based on your frontend infrastructure/architecture limit (a little less than the limit).

One strict callout about the above command: use --max-old-space-size after the node command, not after the filename server/index.js.

  2. If you have an nginx config file, then check the following things:

    • worker_connections: 16384 (for heavy frontend applications) [the nginx default is 512 connections per worker, which is too low for modern applications]

    • use: epoll (an efficient method) [nginx supports a variety of connection-processing methods]

    • http: add the following settings to keep your workers from getting stuck handling unwanted tasks: client_body_timeout, reset_timeout_connection, client_header_timeout, keepalive_timeout, send_timeout.

  3. Remove or turn off all logging/tracking middleware such as APM, Kafka, UTM tracking, Prerender (SEO), etc.

  4. Now for code-level debugging: in your main server file, remove any unwanted console.log that just prints a message.

  5. Now check every server route, i.e. app.get(), app.post()..., for the scenarios below (see the sketch after this list):

  • data => if(data) res.send(data) // do you really need to check data here, or does the API always return something in the response that must be waited for? If not, modify it like this:
data => res.send(data) // this will not block your thread; apply everywhere it's needed
  • else part: if no error is coming, then simply return res.send({}); NO console.log here.

  • error part: some people name it error in one place and err in another, which creates confusion and mistakes, like this:

     error => { next(err) }  // here err is undefined
     err => { next(error) }  // here error is undefined
     app.get(API, (req, res) => { error => next(error) })  // here next is not defined
  • Remove winston, elastic-apm-node, and other unused libraries using the npx depcheck command.

  • In the axios service file, check whether the methods and logging are handled properly, e.g.:

     if (successCB) console.log("success")
     successCB(response.data) // wrong: on success you only log, and successCB is called outside the if block, so it also runs in the failure case
  • Avoid using stringify, parse, etc. on excessively large datasets (which I can see in your logs above, too).

  6. Last but not least, every time your application crashes or your pods restart, check the logs. In the logs, look specifically for this section: Security context. It will tell you why, where, and what the culprit behind the crash is.
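As a sketch of point 5 above (a hypothetical route; fetchData is a stand-in for whatever async source is used), the cleaned-up shape looks like this:

const express = require('express');
const app = express();

// Placeholder async data source for illustration.
const fetchData = () => Promise.resolve({ ok: true });

app.get('/api/data', (req, res, next) => {
  fetchData()
    .then(data => res.send(data)) // send directly; no redundant if (data) gate
    .catch(err => next(err));     // one consistent name for the error, passed straight to next()
});

app.listen(3000);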

For other beginners like me who didn't find any suitable solution for this error: check the installed node version (x32, x64, x86). I have a 64-bit CPU and had installed the x86 node version, which caused the CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory error.

You can also change Windows' environment variable with the following command:

 $env:NODE_OPTIONS="--max-old-space-size=8192"

This command works perfectly. I have 8 GB of RAM in my laptop, so I set size=8192. It all depends on your RAM, and you also need to set the file name. I run the npm run build command, which is why I used build.js.

node --expose-gc --max-old-space-size=8192 node_modules/react-scripts/scripts/build.js
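For context on the --expose-gc flag used above: it exposes a manual collection hook inside the script, which can be called between large build phases; without the flag, global.gc is undefined, so guard the call:

if (global.gc) global.gc(); // manual GC pass, only available under --expose-gc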

In my case, I upgraded my node.js version to the latest (version 12.8.0), and it worked like a charm.

I will mention two types of solutions.

My solution: in my case, I added this to my environment variables:

export NODE_OPTIONS=--max_old_space_size=20480

But even after restarting my computer, it still did not work. My project folder was on the d:\ drive, so I moved my project to the c:\ drive and it worked.

My teammate's solution: a package.json configuration also worked:

"start": "rimraf ./build && react-scripts --expose-gc --max_old_space_size=4096 start",

--max-old-space-size

Unix (macOS)

  1. Open a terminal and open our .zshrc file using nano like so (this will create one if one doesn't exist):

    nano ~/.zshrc

  2. Update our NODE_OPTIONS environment variable by adding the following line into our currently open .zshrc file:

    export NODE_OPTIONS=--max-old-space-size=8192 # increase node memory limit

Please note that we can set the number of megabytes passed in to whatever we like, provided our system has enough memory (here we are passing in 8192 megabytes, which is roughly 8 GB).

  3. Save and exit nano by pressing ctrl + x, then y to agree, and finally enter to save the changes.

  4. Close and reopen the terminal to make sure our changes have been recognised.

  5. We can print out the contents of our .zshrc file to see if our changes were saved, like so: cat ~/.zshrc.
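(Alternatively, running source ~/.zshrc applies the change to the current shell without reopening the terminal.)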

Linux (Ubuntu)

  1. Open a terminal and open the .bashrc file using nano like so:

    nano ~/.bashrc

The remaining steps are similar to the Mac steps above, except that we would most likely be using ~/.bashrc by default (as opposed to ~/.zshrc), so that value would need to be substituted!

Link to the Node.js docs

Upgrade node to the latest version. I was on node 6.6 with this error; after upgrading to 8.9.4, the problem went away.

In case it helps anyone hitting this with a Node.js app that produces heavy logging: a colleague solved the issue by piping the standard output(s) to a file.

If you are trying to launch not node itself but some other software, for example webpack, you can use the environment variable together with the cross-env package:

$ cross-env NODE_OPTIONS='--max-old-space-size=4096' \
  webpack --progress --config build/webpack.config.dev.js

For Angular project bundling, I've added the line below to the scripts section of my package.json file.

"build-prod": "node --max_old_space_size=5120 ./node_modules/@angular/cli/bin/ng build --prod --base-href /"

Now, to bundle my code, I use npm run build-prod instead of ng build --requiredFlagsHere.

Hope this helps!

For Angular, this is how I fixed it:

In package.json, inside the scripts section, add this:

"scripts": {
  "build-prod": "node --max_old_space_size=5048 ./node_modules/@angular/cli/bin/ng build --prod",
},

Now in the terminal/cmd, instead of using ng build --prod, just use

npm run build-prod

If you want to use this configuration for build only, just remove --prod from all 3 places.

Use the option --optimize-for-size. It makes V8 focus on using less RAM.
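Combined with the heap flag, it might look like this (index.js is a placeholder):

 node --optimize-for-size --max-old-space-size=4096 index.js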

I experienced the same problem today. The problem for me was that I was trying to import a lot of data into the database in my NextJS project.

So what I did was install the win-node-env package, like this:

yarn add win-node-env

This is because my development machine was Windows. I installed it locally rather than globally; you can also install it globally, like this: yarn global add win-node-env

Then, in the package.json file of my NextJS project, I added another startup script, like this:

"dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev"

Here, I am passing the node option, i.e. setting 8 GB as the limit. So my package.json file looks somewhat like this:

{
  "name": "my_project_name_here",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev",
    "build": "next build",
    "lint": "next lint"
  },
  ......
}

And then I run it like this:

yarn dev_more_mem

For me, the issue only showed up on my development machine (because I was importing large amounts of data), hence this solution. I thought I'd share it, as it might come in handy for others.

In my case, I had run npm install on a previous version of node; some days later, I upgraded the node version and ran npm install for a few modules. After this, I was getting this error. To fix the problem, I deleted the node_modules folder from each project and ran npm install again.

Hope this fixes the problem for you too.

Note: this was happening on my local machine, and the fix applied to the local machine only.

If none of the given answers work for you, check whether your installed node is compatible with your system (i.e. 32-bit or 64-bit). Usually this type of error occurs because of incompatible node and OS versions; the terminal/system will not tell you about that, but will keep giving you the out-of-memory error.

Run this command: ng build --configuration

Check that you did not install the 32-bit version of node on a 64-bit machine. If you are running node on a 64-bit or 32-bit machine, then the nodejs folder should be located in Program Files or Program Files (x86), respectively.

I hit this error on AWS Elastic Beanstalk; upgrading the instance type from t3.micro (free tier) to t3.small fixed it.

If you use a server with little memory (512 MB), you need to try two things: 1 - Set less memory for react-scripts:

"build": "react-scripts --max_old_space_size=256 build"

2 - You can disable generation of source maps, as described in https://create-react-app.dev/docs/advanced-configuration, by putting this in the .env file:

GENERATE_SOURCEMAP=false

I had the same issue on a Windows machine, and I noticed that for some reason it didn't work in Git Bash, but it was working in PowerShell.

None of these answers worked for me (though I didn't try updating npm).

Here's what worked: my program was using two arrays, one parsed from JSON and the other generated from data in the first one. Just before the second loop, I simply had to set my first JSON-parsed array back to [].

That way a lot of memory is freed, allowing the program to continue execution without a failed memory allocation at some point.
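A minimal sketch of the idea (the rawJson payload and the map callback are placeholders):

// Placeholder input standing in for the real JSON payload.
const rawJson = '[{"n":1},{"n":2}]';

let parsed = JSON.parse(rawJson);            // first, large array
const derived = parsed.map(e => ({ ...e })); // second array built from the first
parsed = null;                               // drop the reference (the answer set it to []) so the GC can reclaim it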

Cheers!

You can fix a "heap out of memory" error in Node.js with the approaches below.

  1. Increase the amount of memory allocated to the Node.js process by using the --max-old-space-size flag when starting the application. For example, you can increase the limit to 4 GB by running node --max-old-space-size=4096 index.js.

  2. Use a memory-leak detection tool, such as a heap dump (a sketch follows this list), to identify and fix memory leaks in your application. You can also use the node inspector and chrome://inspect to check memory usage.

  3. Optimize your code to reduce the amount of memory needed. This might involve reducing the size of data structures, reusing objects instead of creating new ones, or using more efficient algorithms.

  4. Rely on the garbage collector (GC) to manage memory automatically. Node.js uses the V8 engine's garbage collector by default, and its behaviour can be tuned through V8 GC flags.

  5. Use a containerization technology like Docker, which limits the amount of memory available to the container.

  6. Use a process manager like pm2, which can automatically restart the node application if it runs out of memory.
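A minimal sketch for point 2, using the built-in v8 module (Node >= 11.13) rather than a third-party heap dump package; it writes a .heapsnapshot file that the Memory tab of Chrome DevTools can open:

const v8 = require('v8');

// POSIX-only trigger; send `kill -USR2 <pid>` to dump the heap at any moment.
process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot(); // writes a timestamped .heapsnapshot in the cwd
  console.log('Heap snapshot written to', file);
});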

On Windows, open PowerShell or cmd in the project directory (on Mac, open a terminal in that directory) and type the command below; just increase the heap memory, anywhere from 1-8 GB:

node --max-old-space-size={size in MBs} index.js

If you have limited memory or RAM, then go for the following command:

ng serve --source-map=false

It will be able to launch the application. In my example, it would otherwise need 16 GB of RAM, but this way I can run it with 8 GB of RAM.

In the tsconfig.json file, I changed target: "es5" to target: "es2020". It worked for me.
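The relevant fragment of tsconfig.json would look like this:

{
  "compilerOptions": {
    "target": "es2020"
  }
}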

I've been experiencing this same issue on a Hardhat project for weeks now. Any ideas how I would go about setting the node options in this case? I'd appreciate any help, thanks.

Compiling 72 files with 0.7.0
contracts/libraries/ERC20.sol: Warning: SPDX license identifier not provided in source file. Before publishing, consider adding a comment containing "SPDX-License-Identifier: <SPDX-License>" to each source file. Use "SPDX-License-Identifier: UNLICENSED" for non-open-source code. 
Please see https://spdx.org for more information.

contracts/libraries/ERC1155/EnumerableSet.sol:158:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name    
    function add(Bytes32Set storage set, bytes32 value) internal returns (bool) {
    ^ (Relevant source part starts here and spans across multiple lines).

contracts/libraries/ERC1155/EnumerableSet.sol:224:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name    
    function add(AddressSet storage set, address value) internal returns (bool) {
    ^ (Relevant source part starts here and spans across multiple lines).

Compiling 1 file with 0.8.0

<--- Last few GCs --->

[8432:042B0460]   263058 ms: Mark-sweep (reduce) 349.8 (356.3) -> 248.2 (262.4) MB, 434.4 / 0.2 ms  (+ 70.9 ms in 3 steps since start of marking, biggest step 69.6 ms, walltime since start of marking 800 ms) (average mu = 0.989, current mu = 0.990) memory[8432:042B0460]   263627 
ms: Mark-sweep (reduce) 248.2 (259.4) -> 248.2 (252.1) MB, 555.5 / 0.0 ms  (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 556 ms) (average mu = 0.969, current 
mu = 0.023) memory p

<--- JS stacktrace --->

FATAL ERROR: NewNativeModule Allocation failed - process out of memory
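Since Hardhat is itself a Node CLI, one thing worth trying (untested here) is raising the limit through NODE_OPTIONS when invoking it:

NODE_OPTIONS=--max-old-space-size=8192 npx hardhat compile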
