
Wrapping calls to malloc()/realloc()… is this a good idea?

For an assignment, I need to allocate a dynamic buffer, using malloc() for the initial buffer and realloc() to expand that buffer if needed. Everywhere I use (re|m)alloc(), the code looks like the following:

char *buffer = malloc(size);

if (buffer == NULL) {
    perror("malloc");
    exit(EXIT_FAILURE);
}

The program only reads data from a file and outputs it, so I thought just exiting the program when (re|m)alloc fails would be a good idea. Now, the real question is:

Would it be beneficial to wrap the calls, e.g. like this?

void *Malloc(size_t size) {
    void *buffer = malloc(size);

    if (buffer == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }

    return buffer;
}

Or is this a bad idea?

It's a bad idea in the form presented, because in anything other than a trivial program written for an assignment, you want to do something more useful/graceful than bailing out. So best not to get into bad habits. This isn't to say that wrappers around allocation are bad per se (centralizing error handling can be a good thing), just that a wrapper whose return value you don't check (e.g., one that doesn't return at all on failure) is a bad idea, unless you provide some mechanism for calling code to hook into the bail-out logic.

If you do want to do it in the form you've presented, I'd strongly recommend giving it a name that is more clearly distinct from malloc than Malloc is. Like malloc_or_die. :-)
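A minimal sketch of such a wrapper, keeping the bail-out behaviour from the question but using the suggested name so it cannot be mistaken for malloc itself:

#include <stdio.h>
#include <stdlib.h>

void *malloc_or_die(size_t size) {
    void *buffer = malloc(size);
    if (buffer == NULL) {
        perror("malloc");       /* report why the allocation failed */
        exit(EXIT_FAILURE);     /* bail out, as in the question */
    }
    return buffer;
}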

In most instances, the attempt to use a null pointer will crash the program soon enough anyway, and it will make it easier to debug, since you get a nice core dump to wallow around in, which you don't get if you call exit().

The only advice I'd give is to dereference the returned pointer as soon as possible after allocating it, even if only gratuitously, so that the core dump can lead you straight to the faulty malloc call.
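A sketch of that idea (the helper name and the immediate write are purely illustrative, and assume size > 0): touching the memory right after allocating means that if malloc returned NULL, the crash happens at the allocation site rather than somewhere far away.

#include <stdlib.h>

char *make_buffer(size_t size) {    /* hypothetical helper */
    char *buffer = malloc(size);
    buffer[0] = '\0';   /* gratuitous dereference: crashes right here if buffer is NULL */
    return buffer;
}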

There is rarely much you can do to recover from memory exhaustion, so exiting is usually the right thing to do. But do it in such a way as to make the post-mortem easier.

Just to be clear, though, memory exhaustion usually occurs long after the OS is crippled by page-swapping activity. This strategy is really only useful to catch ridiculous allocations, like trying to malloc(a_small_negative_number) due to a bug.
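For illustration (a hypothetical example, not from the answer above): because malloc takes a size_t, a small negative int is converted to an enormous unsigned value, which is exactly the kind of ridiculous request this strategy catches.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = -1;               /* bug: negative size */
    void *p = malloc(n);      /* -1 converts to a huge size_t, so this will almost certainly fail */
    printf("requested %zu bytes, got %p\n", (size_t)n, p);
    free(p);
    return 0;
}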

In your case it's OK. Just remember to print a message with the reason for the premature exit, and it would be nice to include the line number. Something like this:

void *malloc2(size_t size, int line_num) {
    void *buffer = malloc(size);
    if (buffer == NULL) {
        fprintf(stderr, "ERROR: allocation failed at line %d\n", line_num);
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    return buffer;
}

#define Malloc(n) malloc2((n), __LINE__)
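A quick usage sketch (assuming the malloc2()/Malloc definitions above; the size and string are placeholders): because the macro expands at the call site, __LINE__ reports the caller's line number.

#include <stdlib.h>   /* for free() */
#include <string.h>

int main(void) {
    char *buffer = Malloc(1024);   /* on failure, reports this line and exits */
    strcpy(buffer, "hello");
    free(buffer);
    return 0;
}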

EDIT: as others have mentioned, it's not a good habit for an experienced programmer, but for a beginner who has trouble even keeping track of the program flow in the "happy" case, it's OK.

Should I bother detecting OOM (out of memory) errors in my C code?

That's my answer to a similar question. In summary, I'm in favor of designing apps so they recover from any kind of crash, and then treating out-of-memory as a reason to crash.

The ideas that "checking for malloc for failure is useless because of overcommit" or that "the OS will already be crippled by the time malloc fails" are seriously outdated. Robust operating systems have never overcommitted memory, and historically-not-so-robust ones (like Linux) nowadays have easy ways to disable overcommit and protect against the OS becoming crippled due to memory exhaustion - as long as apps do their part not to crash and burn when malloc fails!

There are many reasons malloc can fail on a modern system:

  • Insufficient physical resources to instantiate the memory.
  • Virtual address space exhausted, even when there is plenty of physical memory free. This can happen easily on a 32-bit machine (or 32-bit userspace) with more than 4 GB of RAM+swap.
  • Memory fragmentation. If your allocation patterns are very bad, you could end up with 4 million 16-byte chunks spaced evenly 1000 bytes apart, and unable to satisfy a malloc(1024) call.

How you deal with memory exhaustion depends on the nature of your program.

Certainly from a standpoint of the system's health as a whole, it's nice for your program to die. That reduces resource starvation and may allow other apps to keep running. On the other hand, the user will be very upset if that means losing hours of work editing a video, typing a paper, drafting a blog post, coding, etc. Or they could be happy if their mp3 player suddenly dying with out-of-memory means their disk stops thrashing and they're able to get back to their word processor and click "save".

As for OP's original question, I would strongly advise against writing malloc wrappers that die on failure, or writing code that just assumes it will segfault immediately upon using the null pointer if malloc failed. It's an easy bad habit to get into, and once you've written code that's full of unchecked allocations, it will be impossible to later reuse that code in any program where robustness matters.

A much better solution is just to keep returning failure to the calling function, and let the calling function return failure to its calling function, etc., until you get all the way back to main or similar, where you can write if (failure) exit(1). This way, the code is immediately reusable in other situations where you might actually want to check for errors and take some kind of recovery steps to free up memory, save/dump valuable data to disk, etc.
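A minimal sketch of that style, with hypothetical helper names (duplicate, process): each function reports failure to its caller instead of exiting, and only the top level decides what to do about it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *duplicate(const char *s) {
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL)
        return NULL;               /* report failure to the caller */
    strcpy(copy, s);
    return copy;
}

static int process(const char *s) {
    char *copy = duplicate(s);
    if (copy == NULL)
        return -1;                 /* keep passing the failure up */
    puts(copy);
    free(copy);
    return 0;
}

int main(void) {
    if (process("hello") != 0) {   /* only here do we decide to bail out */
        fprintf(stderr, "out of memory\n");
        exit(1);
    }
    return 0;
}

Because duplicate() and process() never exit on their own, the same code can be dropped into a program that instead frees caches and retries, or saves the user's work before giving up.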

I think that it is a bad idea: first, checking the return of malloc doesn't buy you much on modern systems, and second, it gives you a false sense of security that all your allocations are fine whenever you use such a call.

(I am assuming you are writing for a hosted environment, not an embedded/freestanding one.)

Modern systems with a large virtual address space will just never return (void*)0 from malloc or realloc, except, maybe, if the arguments were bogus. You will run into problems much, much later, when your system starts to swap like crazy or even runs out of swap.

So no, don't check the return value of these functions; it doesn't make much sense. Instead, check the arguments to malloc against 0 (and for realloc, whether both are 0 simultaneously) with an assertion, since then the problem is not inside malloc or realloc but in the way you are calling them.
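A sketch of that suggestion, with hypothetical wrapper names (checked_malloc, checked_realloc): the assertions fire on suspicious arguments, which is where the bug actually is, rather than on the return value.

#include <assert.h>
#include <stdlib.h>

void *checked_malloc(size_t size) {
    assert(size != 0);                  /* a zero size is a bug in the caller */
    return malloc(size);
}

void *checked_realloc(void *ptr, size_t size) {
    assert(ptr != NULL || size != 0);   /* both NULL and 0 at once is suspicious */
    return realloc(ptr, size);
}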
