
When is memory deallocation strictly required in C and C++?

I have found the following references to the C and C++ standards on StackOverflow ( Memory Allocation/Deallocation? ), in relation to memory deallocation:

C++ Language:

"If the argument given to a deallocation function in the standard library is a pointer that is not the null pointer value (4.10), the deallocation function shall deallocate the storage referenced by the pointer, rendering invalid all pointers referring to any part of the deallocated storage ". [Bold is mine].

C Language:

The free function causes the space pointed to by ptr to be deallocated, that is, made available for further allocation . [Bold is mine].

So, let's suppose a scenario like the following one:

You have a linked list in a demo app. After creating and linking your nodes, searching, sorting, and so forth, your app finishes successfully, with a beautiful "return 0".

What is the problem if you have not deallocated any node, given that all the pointers you created have already been destroyed?
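
For concreteness, here is a minimal sketch of the kind of demo I mean (the Node type and the names are just illustrative):

  #include <cstdio>

  struct Node {
      int value;
      Node* next;
  };

  int main() {
      Node* head = nullptr;
      for (int i = 0; i < 10; ++i)
          head = new Node{i, head};     // nodes are created but never deleted

      for (Node* p = head; p != nullptr; p = p->next)
          std::printf("%d ", p->value); // search, sort, print, ...

      return 0;                         // the pointers go away; the nodes are never freed
  }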

Please, I want to clearly distinguish between:

  • what is really needed ("If you do not deallocate, you have a memory leak because of ...");

  • what is a good practice, but not strictly required.

Finally: I have intentionally avoided mentioning smart pointers. Because, if your answer is "deallocating is a good practice (= not strictly required, no memory leak), because in a real-life scenario you will need to deallocate, etc.", then I can conclude:

  • If I am developing a demo app, I do not need to use a smart pointer either (if I am in C++).

  • If I am in C, I do not need to deallocate, because when the app reaches the end of its scope, every pointer will be destroyed anyway.

Exception: if my linked list has a function to delete nodes, then I understand I need to deallocate there, to avoid a memory leak.

Any advice, correction, clarification, distinction from your side will be very much appreciated!


Edit: Thanks to all for your quick answers. Especially @Pablo Esteban Camacho.

This is a topic where two answers are required, because C and C++ follow completely different philosophies when it comes to resource management.

C

When using malloc/free in C, the only affected resource is memory. That leads to what other answers already brought up: you may be tempted not to free memory at the end of the program because the OS will reclaim all the process' memory anyway. Since I don't program in C, I can't say if and when that may be justified.
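
If you do decide to release such a list yourself before returning, the usual pattern in C is to save the next pointer before freeing each node. A minimal sketch, assuming a hypothetical Node type (written so it also compiles as C++):

  #include <stdlib.h>

  struct Node {
      int value;
      struct Node* next;
  };

  /* Frees every node of the list; the caller must not use 'head' afterwards. */
  static void free_list(struct Node* head) {
      while (head != NULL) {
          struct Node* next = head->next;  /* save the link before the node is gone */
          free(head);
          head = next;
      }
  }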

C++

C++ is different. There is no excuse for not destroying your objects. C++ ties acquiring and releasing memory to general initialization and cleanup. When you create an object its constructor runs, and when you destroy it its destructor runs. That's true both for stack-allocated objects and for free-store allocated ones ( new and delete ). If you don't delete, then the destructor does not run either, which means essential actions like closing database or network connections, flushing files to disk, etc. may not happen.
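
A small illustration of that point (the Logger class and its closing record are hypothetical, not taken from this answer): if the object is created with new and never deleted, its destructor never runs, so the final record is never written, even though the OS reclaims the memory at exit.

  #include <cstdio>

  class Logger {
      std::FILE* f_;
  public:
      explicit Logger(const char* path) : f_(std::fopen(path, "w")) {}
      void log(const char* msg) { if (f_) std::fprintf(f_, "%s\n", msg); }
      ~Logger() {                                    // essential cleanup lives here
          if (f_) {
              std::fprintf(f_, "=== end of log ===\n");
              std::fclose(f_);
          }
      }
  };

  int main() {
      Logger* lg = new Logger("run.log");
      lg->log("started");
      return 0;   // no delete: ~Logger() never runs, the closing record is never written
  }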

In C++ never think of “memory management”, always think of “resource management”. Memory is just one among many types of resources.

Then again, in the C++ universe the whole question feels a bit strange. It shouldn't even come up because if you follow best practices you use C++'s automatic resource management: either by creating objects on the stack directly or by using resource management wrappers[1]. If you catch yourself writing a naked new – and hopefully a corresponding delete – you should have as solid a justification for it as when writing a goto .

[1] The smart pointers std::unique_ptr and std::shared_ptr are the obvious resource managers. But there are many, for example std::vector . Granted, it does a lot more, but one of its jobs is taking care of the piece of heap memory where the vector's items are stored.
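
As a rough sketch of what that automatic cleanup looks like (Widget is just a placeholder type):

  #include <memory>
  #include <vector>

  struct Widget { int data[16]; };

  int main() {
      auto w = std::make_unique<Widget>();   // heap allocation owned by a smart pointer
      std::vector<int> values(1000, 0);      // the vector owns its heap buffer

      // ... use w and values ...

      return 0;   // no delete, no free: both destructors release the memory automatically
  }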

Other than what has already been properly answered, there are a few points I would like to add for better clarification.

  1. I would like to refer you to our C++ standards website ( https://isocpp.org ), where you will find the most authoritative answers. Once you become familiar with the most important authors, you will feel more confident trusting the answers you receive.

  2. That said, I would like to invite you to read carefully the C++ Core Guidelines , a document announced by Dr. Bjarne Stroustrup (the creator of C++) in 2015, and which is permanently being updated, principally by Dr. Bjarne Stroustrup and Herb Sutter ("a prominent C++ expert" and, currently, the head of the ISO C++ committee): http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines . Other important contributors provide the results of their research as well. You will find that the document was updated only a few days ago (July 31st, 2017).

  3. In particular, going back to your questions, I noticed you returned several times to one that has gone unanswered: "what about smart pointers?". In the mentioned document, you will find that smart pointers perform a customized and limited form of garbage collection which effectively releases resources. Given your questions, I would suggest reviewing in depth:

    a) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rr-raii (this C++ rule, RAII, is a key point for a deeper understanding of the C++ philosophy, and will shed some "light" on your questions).

    b) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource

    c) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rr-mallocfree

    d) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-cpl

  4. Finally, in case you want to go beyond (to the discussions), I would recommend you google "smart pointers are not so smart" (quoting from memory), written by Dr. Scott Meyers, another of our most important writers. [Note: my quotation comes from one of Dr. Meyers' books; it is not the title of an article.]

Hope this helps.

Freeing the memory before your program exits is good practice, but it is not strictly required, since all the memory is freed once a regular program ends. At least this is the case for current operating systems.

But programs tend to evolve over time. So, maybe 6 months later, you decide to use "that already existing linked list implementation" in another project. Or maybe you will use it in a shared DLL which stays loaded in memory as long as the OS is running. Or maybe you extend your demo so that it runs for a while and you are limited on memory.

There are many ways in which something that is not recommended but "works" today can go haywire tomorrow. Best practices are recommended for a reason.

But to be clear, you are not required to take care of freeing your pointers in one-shot applications.

If you're running on an operating system that frees memory for the program on exit (which would be pretty much everything), I would say that freeing memory before exit is not optimal but might be a good practice.

It might be a good practice because things change over time and you might need the ability to not exit and to free things properly. So, from a good engineering point of view, you might want to free memory.

It is not optimal because the operating system is orders of magnitude better at freeing your memory in bulk than your program is. Walking a linked list and freeing one element at a time will bring each of those elements into the cache/TLB just to throw them away, and in the worst case you might even need to swap them in. Two decades ago I saw research showing that common implementations of in-line malloc boundary tags could make the process of manually freeing memory on exit 5-6 orders of magnitude slower in real applications (this was with swap, which might be much less common today; also, I don't remember the actual number, this is a conservative guess, and the actual number could have been much worse; it was minutes vs. milliseconds). Furthermore, with most malloc implementations, freeing doesn't do anything from the point of view of the operating system anyway; the operating system still has to go through all the effort of actually freeing the memory properly.

  • what is really needed ("If you do not deallocate, you have a memory leak because of ...");

You have to deallocate every resource you no longer need, even in the middle of a run. You sometimes need temporarily allocated dynamic memory; deallocate it as soon as your logic says it will not be used in the future.

  • what is a good practice, but not strictly required.

Good practice is what I said: "always deallocate what you no longer need". You can sometimes defer the deallocation for good reasons (for example, it may be more important to finish some other task than to deallocate memory at a given instant). On most OSes, all memory used by a process is automatically released when it ends, but this is not a requirement!

  • If I am developing a demo app, I do not need either to use a smart pointer (if I am in C++).

On the contrary: always prefer smart pointers, because if you use them correctly, deallocation will take place in the right places! (See the sketch after these points.)

  • If I am in C, I do not need to deallocate, because when the app reaches the end of its scope, every pointer will be destroyed anyway.

No, that is not good practice; deallocate as soon as possible.
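
To make the smart-pointer suggestion concrete for the linked list from the question, each node can own its successor through std::unique_ptr, so destroying the head releases the whole chain. A sketch (note that for very long lists this naive version destroys nodes recursively, so a real implementation may need an iterative destructor):

  #include <memory>

  struct Node {
      int value;
      std::unique_ptr<Node> next;   // each node owns the next one
  };

  int main() {
      std::unique_ptr<Node> head;
      for (int i = 0; i < 10; ++i)
          head = std::unique_ptr<Node>(new Node{i, std::move(head)});

      // ... search, sort, print ...

      return 0;   // head's destructor deletes the whole chain, node by node
  }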

Please, I want to clearly distinguish between:

  • what is really needed ("If you do not deallocate, you have a memory leak because of ...");
  • what is a good practice, but not strictly required.

What is "really needed" is that the total memory used by your program needs to remain within limits of what is available to your program. Continually allocating memory and never deallocating it means the amount of memory consumed by your program keeps increasing for as long as it is running. If the program uses more memory than is available to it, then subsequent allocations may well fail, and the program will probably not be able to function as intended (eg an algorithm that relies on being able to use a buffer cannot run correctly if the buffer cannot be allocated).

As a simple example, the loop

  while (1)
  {
      do_something();
  }

may well fail in ugly ways if do_something() allocates memory and never releases it.
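
For instance, a hypothetical do_something() like the one below leaks one buffer per call, so the loop's memory footprint grows without bound until an allocation eventually fails:

  #include <cstdlib>

  void do_something() {
      void* buffer = std::malloc(4096);   // a fresh block on every call
      if (buffer == nullptr)
          return;                         // sooner or later this is what happens
      // ... use buffer ...
      // missing std::free(buffer): the block is leaked on every iteration
  }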

There is, however, nothing (short of work ethic, or caring about avoiding the grumpiness of employers or customers) that absolutely forces a programmer to deallocate any dynamically allocated memory. There is, practically, no absolute need for dynamically allocated memory to be deallocated if:

  • The programmer simply does not care about the consequences (eg complaints by users) of a program running out of memory, and needing to be terminated or reset; OR
  • It is somehow known that, although the program dynamically allocates memory, it will never allocate more than needed; AND
  • It is known that the operating system will clean up properly as a program terminates.

However, for programmers who care about their users, who use dynamic memory allocation because they DO NOT KNOW how much memory their program needs, or who are targeting an OS that does not properly clean up after programs terminate, deallocating memory is certainly advisable.

Good practice is normally to systematically ensure that every dynamic memory allocation is subsequently followed by exactly one explicit deallocation of that memory. Doing so avoids all the potential problems associated with not deallocating memory.
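
That pairing is easiest to verify when the same function performs both the allocation and the matching deallocation, for example for a temporary buffer. A sketch (the function and its parameters are made up for illustration, and it compiles as C or C++):

  #include <stdlib.h>
  #include <string.h>

  /* Returns 0 on success, -1 on failure; the temporary buffer never escapes. */
  static int process_copy(const char* data, size_t len) {
      char* tmp = (char*)malloc(len);
      if (tmp == NULL)
          return -1;                  /* nothing was allocated, nothing to free */
      memcpy(tmp, data, len);
      /* ... work on tmp ... */
      free(tmp);                      /* the single deallocation matching the malloc */
      return 0;
  }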

Adding to what has already been written in other answers, I would mention some more reasons to free all objects allocated during program execution before exiting to the OS:

  • If you run the program under a memory checker such as valgrind or purify , it will tell you whether all objects have indeed been freed. Any objects still allocated may indicate memory leaks in the program: objects that internal routines have lost track of and forgotten to free in due time. Such memory leaks can lead to program failures if they happen in repetitive tasks and cause the memory allocator to run out of space.

  • If the allocated objects have been corrupted, trying to free them all may cause undefined behavior, hopefully segmentation faults, which are extra chances to identify and correct bugs.

This process may be costly and is not necessary in most environments, so one can make it optional, via a command-line argument or an environment variable, so as to use it in beta and debugging sessions and skip it in production.
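
One way to make that final cleanup optional, as suggested above, is to guard it with an environment variable that you only set for valgrind or debugging runs; the variable name FULL_CLEANUP below is just an example:

  #include <stdlib.h>
  #include <string.h>

  int main(void) {
      char* data = (char*)malloc(1024);
      /* ... normal program work ... */

      /* Full teardown only when explicitly requested, e.g. for a valgrind session. */
      const char* flag = getenv("FULL_CLEANUP");
      if (flag != NULL && strcmp(flag, "1") == 0)
          free(data);

      return 0;   /* otherwise, let the OS reclaim everything at exit */
  }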

Note, however, that some complex data structures may be impossible to free without disproportionate effort or extra space overhead. For short-lived executables running under any modern OS, this is not a real problem.
