
Why does allocating a large chunk of memory fail when realloc'ing in small chunks doesn't?

This code results in x pointing to a chunk of memory 100 GB in size.

#include <stdlib.h>
#include <stdio.h>

int main() {
    auto x = malloc(1);
    for (int i = 1; i < 1024; ++i) x = realloc(x, i*1024ULL*1024*100); // grow to i * 100 MiB each pass
    while (true); // Give us time to check top
}

While this code fails to allocate.

#include <stdlib.h>
#include <stdio.h>

int main() {
    auto x = malloc(1024ULL*1024*100*1024);
    printf("%llu\n", x);
    while (true); // Give us time to check top
}

Well, you're allocating less memory in the one that succeeds:

for (int i = 1; i < 1024; ++i) x = realloc(x, i*1024ULL*1024*100);

The last realloc is:

x = realloc(x, 1023 * (1024ULL*1024*100));

As compared to:

auto x = malloc(1024 * (1024ULL*100*1024));

Maybe that's right where your memory boundary is: the last 100 MiB that broke the camel's back?
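
As a quick sanity check of the arithmetic (not part of the original answer), a small program can print both request sizes; they differ by exactly one 100 MiB step:

#include <stdio.h>

int main(void) {
    unsigned long long step = 1024ULL * 1024 * 100;      /* 100 MiB */
    unsigned long long last_realloc  = 1023 * step;      /* final size requested by the loop */
    unsigned long long single_malloc = 1024 * step;      /* size requested by the one-shot malloc */
    printf("last realloc : %llu bytes (%.2f GiB)\n", last_realloc,  last_realloc  / (1024.0 * 1024 * 1024));
    printf("single malloc: %llu bytes (%.2f GiB)\n", single_malloc, single_malloc / (1024.0 * 1024 * 1024));
    return 0;
}

That prints roughly 99.90 GiB for the last realloc versus exactly 100 GiB for the single malloc.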

My guess is that the memory size of your system is less than the 100 GiB that you are trying to allocate. While Linux does overcommit memory, it still bails out of requests that are way beyond what it can fulfill. That is why the second example fails.
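
A minimal sketch (not from the original post) that makes the refusal explicit by checking malloc's return value; on glibc, errno is set to ENOMEM when the request is rejected:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    void *x = malloc(1024ULL * 1024 * 100 * 1024);   /* the same 100 GiB request as above */
    if (x == NULL) {
        /* glibc sets errno to ENOMEM when it refuses the request */
        fprintf(stderr, "malloc failed: %s\n", strerror(errno));
        return 1;
    }
    printf("got %p\n", x);
    free(x);
    return 0;
}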

The many small increments of the first example, on the other hand, stay below that threshold. So each one of them succeeds: the kernel knows that you haven't actually used any of the previously reserved memory yet, so it has no indication that it won't be able to back those 100 additional MiB.
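
A hedged way to see that lazy backing in action (the 1 GiB size here is an arbitrary choice, small enough to be safe on most machines): reserve a block without touching it, then write to it, and watch VIRT versus RES in top.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t size = 1024ULL * 1024 * 1024;   /* 1 GiB */
    char *p = malloc(size);
    if (p == NULL) {
        perror("malloc");
        return 1;
    }
    printf("reserved %zu bytes at %p -- VIRT jumps, RES stays small\n", size, (void *)p);
    sleep(30);                              /* time to check top before committing the pages */

    memset(p, 0xAB, size);                  /* touching the pages forces the kernel to back them */
    printf("touched all pages -- RES should now be about 1 GiB\n");
    sleep(30);

    free(p);
    return 0;
}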

I believe that the threshold for when a memory request from a process fails is relative to the available RAM, and that it can be adjusted (though I don't remember how exactly).
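
For what it's worth, on Linux those knobs are the vm.overcommit_memory (0 = heuristic, 1 = always overcommit, 2 = strict accounting) and vm.overcommit_ratio sysctls. A small sketch, assuming /proc is mounted as usual, that just prints the current settings:

#include <stdio.h>

static void print_file(const char *path) {
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        perror(path);
        return;
    }
    int c;
    printf("%s: ", path);
    while ((c = fgetc(f)) != EOF)
        putchar(c);
    fclose(f);
}

int main(void) {
    print_file("/proc/sys/vm/overcommit_memory");
    print_file("/proc/sys/vm/overcommit_ratio");
    return 0;
}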
