
Compiler out of heap space for large array

So I have an array of 15,000,000 doubles, and at runtime random subsets of 2,000 elements need to be extracted from it for processing.
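For illustration, a minimal sketch of drawing such a subset with the standard &lt;random&gt; facilities; the helper name randomSubset and the decision to allow repeated picks are assumptions, not part of the question:

#include <cstddef>
#include <random>
#include <vector>

// Hypothetical helper: draw 2,000 elements at random (duplicates possible)
// from an array of `count` doubles.
std::vector<double> randomSubset(const double* data, std::size_t count, std::size_t k = 2000)
{
    static std::mt19937_64 rng{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> pick(0, count - 1);

    std::vector<double> subset;
    subset.reserve(k);
    for (std::size_t i = 0; i < k; ++i)
        subset.push_back(data[pick(rng)]);
    return subset;
}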

I've tried initialising the array using the following:

static const double myArray[15000000] = {-2.1232, -6.4243, 23.432, ...};

However, when building I get the error "C1060 compiler is out of heap space". In Visual Studio 2019, I've gone into the project properties -> Linker -> System and changed the Heap Reserve Size to "8000000000", which I assumed would be large enough, and I have 16GB on my machine, but I still get the same error. I've also tried using the x64 compiler, but to no avail.

I've also tried writing the array to a CSV, and then to a binary file, and reading from that at runtime instead. However, the read process takes far too long, as I need to read from it, ideally, several times a second.
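For reference, a minimal sketch of reading such a binary file into memory once at start-up, so that later subset extractions never touch the disk; the file name myarray.bin and the raw native-endian double layout are assumptions:

#include <cstddef>
#include <fstream>
#include <vector>

// Read a file of raw, native-endian doubles into memory in one go.
// Keep the returned vector alive for the lifetime of the program and
// draw subsets from it, instead of re-reading the file each time.
std::vector<double> loadDoubles(const char* path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    const std::streamsize bytes = in.tellg();
    in.seekg(0, std::ios::beg);

    std::vector<double> values(static_cast<std::size_t>(bytes) / sizeof(double));
    in.read(reinterpret_cast<char*>(values.data()), bytes);
    return values;
}

// Example usage:
//   static const std::vector<double> myArray = loadDoubles("myarray.bin");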

I'm relatively new to C++, but especially new when it comes to memory allocation. What would you suggest as a solution?

If you have your 15M doubles in binary format, you can embed that into your binary and reference it. The run-time cost is just a bit more disk IO when first loading your binary, but that should be much faster than parsing a CSV.
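On MSVC, one way to do that embedding is a Win32 resource. Below is only a rough sketch; the resource ID IDR_DOUBLES, its value in resource.h, and the file name myarray.bin are assumptions:

// data.rc (compiled by the resource compiler and linked into the .exe):
//   IDR_DOUBLES RCDATA "myarray.bin"
// resource.h:
//   #define IDR_DOUBLES 101

#include <cstddef>
#include <windows.h>
#include "resource.h"

// Look up the embedded block of raw doubles and report how many it holds.
const double* embeddedDoubles(std::size_t& count)
{
    HRSRC res = FindResource(nullptr, MAKEINTRESOURCE(IDR_DOUBLES), RT_RCDATA);
    if (!res) { count = 0; return nullptr; }

    HGLOBAL data = LoadResource(nullptr, res);
    count = SizeofResource(nullptr, res) / sizeof(double);
    return static_cast<const double*>(LockResource(data));
}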

The problem may be that you have enough memory, but it is not contiguous, so my suggestion is to use std::list.
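A minimal sketch of that suggestion, filling a std::list&lt;double&gt; at run time instead of using a 15,000,000-element compile-time initializer; the file name myarray.bin and its raw-double layout are assumptions:

#include <fstream>
#include <list>

// Build the container at run time, one node per value, so no single
// contiguous block of ~120 MB is required.
std::list<double> loadList()
{
    std::list<double> values;
    std::ifstream in("myarray.bin", std::ios::binary);
    double d;
    while (in.read(reinterpret_cast<char*>(&d), sizeof d))
        values.push_back(d);
    return values;
}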
