
Why do I need to delete[]?

Let's say I have a function like this:

int main()
{
    char* str = new char[10];

    for(int i=0;i<5;i++)
    {
        //Do stuff with str
    }

    delete[] str;
    return 0;
}
  1. Why would I need to delete str if I am going to end the program anyways? I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?

  2. Is it just good practice?

  3. Does it have deeper consequences?

If in fact your question really is "I have this trivial program, is it OK that I don't free a few bytes before it exits?" the answer is yes, that's fine. On any modern operating system that's going to be just fine. And the program is trivial; it's not like you're going to be putting it into a pacemaker or running the braking systems of a Toyota Camry with this thing. If the only customer is you then the only person you can possibly impact by being sloppy is you.

The problem then comes in when you start to generalize to non-trivial cases from the answer to this question asked about a trivial case.

So let's instead ask two questions about some non-trivial cases.

I have a long-running service that allocates and deallocates memory in complex ways, perhaps involving multiple allocators hitting multiple heaps. Shutting down my service in the normal mode is a complicated and time-consuming process that involves ensuring that external state -- files, databases, etc -- are consistently shut down. Should I ensure that every byte of memory that I allocated is deallocated before I shut down?

Yes, and I'll tell you why. One of the worst things that can happen to a long-running service is if it accidentally leaks memory. Even tiny leaks can add up to huge leaks over time. A standard technique for finding and fixing memory leaks is to instrument the allocation heaps so that at shutdown time they log all the resources that were ever allocated without being freed. Unless you like chasing down a lot of false positives and spending a lot of time in the debugger, always free your memory even if doing so is not strictly speaking necessary.
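The instrumentation idea described above can be sketched in a few lines. This is a hypothetical tracker (the names `tracked_alloc`, `tracked_free`, and `report_leaks` are made up for illustration), not a real instrumented heap; real tools hook `operator new`/`operator delete` or the allocator itself:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <map>

// Records every tracked allocation; anything still present at
// shutdown is reported as a leak.
static std::map<void*, std::size_t> g_live;

void* tracked_alloc(std::size_t n) {
    void* p = std::malloc(n);
    if (p) g_live[p] = n;
    return p;
}

void tracked_free(void* p) {
    g_live.erase(p);
    std::free(p);
}

// Called at shutdown: every entry left in g_live was never freed.
std::size_t report_leaks() {
    for (const auto& kv : g_live)
        std::fprintf(stderr, "leaked %zu bytes at %p\n", kv.second, kv.first);
    return g_live.size();
}
```

If you deliberately skip frees at shutdown, every one of them shows up here as a false positive, which is exactly the noise the paragraph above warns about.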

The user is already expecting that shutting the service down might take billions of nanoseconds so who cares if you cause a little extra pressure on the virtual allocator making sure that everything is cleaned up? This is just the price you pay for big complicated software. And it's not like you're shutting down the service all the time, so again, who cares if its a few milliseconds slower than it could be?

I have that same long-running service. If I detect that one of my internal data structures is corrupt I wish to "fail fast". The program is in an undefined state, it is likely running with elevated privileges, and I am going to assume that if I detect corrupted state, it is because my service is actively being attacked by hostile parties. The safest thing to do is to shut down the service immediately. I would rather allow the attackers to deny service to the clients than to risk the service staying up and compromising my users' data further. In this emergency shutdown scenario should I make sure that every byte of memory I allocated is freed?

Of course not. The operating system is going to take care of that for you. If your heap is corrupt, the attackers may be hoping that you free memory as part of their exploit. Every millisecond counts. And why would you bother polishing the doorknobs and mopping the kitchen before you drop a tactical nuke on the building?

So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".

Yes, it is good practice. You should NEVER assume that your OS will take care of your memory deallocation; if you get into that habit, it will bite you later on.

To answer your question, however: upon exiting from main, the OS frees all memory held by that process, including memory allocated by any threads you may have spawned and any variables you allocated. The OS will take care of freeing up that memory for others to use.

Important note: delete's freeing of memory is almost just a side effect. The important thing it does is to destruct the object. With RAII designs, this could mean anything from closing files, freeing OS handles, terminating threads, or deleting temporary files.

Some of these actions would be handled by the OS automatically when your process exits, but not all.
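As a concrete sketch of that point, here is a hypothetical RAII class (TempFile is made up for illustration) whose destructor deletes a temporary file. If the process exits without running the destructor, the OS reclaims the memory, but the file stays behind:

```cpp
#include <cstdio>
#include <string>

// RAII wrapper: the destructor's job is to remove a temporary file.
// The OS will reclaim this object's memory at process exit, but it
// will NOT perform the destructor's side effect for you.
class TempFile {
public:
    explicit TempFile(std::string path) : path_(std::move(path)) {
        std::FILE* f = std::fopen(path_.c_str(), "w");  // create the file
        if (f) std::fclose(f);
    }
    ~TempFile() { std::remove(path_.c_str()); }  // side effect beyond memory
    const std::string& path() const { return path_; }
private:
    std::string path_;
};
```

Let such an object die with the process instead of going out of scope, and the stale file survives the program.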

In your example, there's no reason NOT to call delete. But there's no reason to call new either, so you can sidestep the issue this way:

char str[10];

Or, you can sidestep the delete (and the exception safety issues involved) by using smart pointers...
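For instance, rewriting the original example with std::unique_ptr makes the delete[] happen automatically when the pointer goes out of scope, even if an exception is thrown partway through (the helper use_buffer is just for illustration):

```cpp
#include <memory>

int use_buffer() {
    // unique_ptr<char[]> calls delete[] automatically when it goes out
    // of scope -- including during exception unwinding.
    std::unique_ptr<char[]> str(new char[10]);
    for (int i = 0; i < 5; i++) {
        str[i] = static_cast<char>('a' + i);  // do stuff with str
    }
    return str[4];
}   // no explicit delete[] needed here
```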

So, generally you should always be making sure your object's lifetime is properly managed.

But it's not always easy: Workarounds for the static initialization order fiasco often mean that you have no choice but to rely on the OS cleaning up a handful of singleton-type objects for you.

Contrary answer: No, it is a waste of time. A program with a vast amount of allocated data would have to touch nearly every page in order to return all of the allocations to the free list. This wastes CPU time, creates memory pressure for uninteresting data, and possibly even causes the process to swap pages back in from disk. Simply exiting releases all of the memory back to the OS without any further action.

(not that I disagree with the reasons in "Yes", I just think there are arguments both ways)

Your operating system should take care of the memory and clean it up when you exit your program, but it is in general good practice to free any memory you have reserved. I personally think it is best to get into the mindset of doing so: if you are writing simple programs, you are most likely doing so to learn.

Either way, the only way to guarantee that the memory is freed is by doing so yourself.

new and delete are reserved-keyword brothers. They should cooperate with each other within a code block or across the parent object's lifecycle. Whenever the younger brother makes a mess (new), the older brother will want to clean (delete) it up. Then the mother (your program) will be happy and proud of them.

I cannot agree more to Eric Lippert's excellent advice:

So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".

Other answers here have provided arguments both for and against, but the real crux of the matter is what your program does. Consider a more non-trivial example wherein the dynamically allocated instance is of a custom class whose destructor performs some action that produces a side effect. In such a situation, the question of memory leaks is the trivial part; the more important problem is that failing to call delete on such a class instance results in undefined behavior.

[basic.life] 3.8 Object lifetime
Para 4:

A program may end the lifetime of any object by reusing the storage which the object occupies or by explicitly calling the destructor for an object of a class type with a non-trivial destructor. For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior.

So the answer to your question is, as Eric says, "it depends on what your program does".

It's a fair question, and there are a few things to consider when answering:

  • Some objects have more complex destructors which don't just release memory when they're deleted. They may have other side effects, which you don't want to skip.
  • It is not guaranteed by the C++ standard that your memory will be released when the process terminates. (Of course on a modern OS it will be freed, but if you were on some weird OS which didn't do that, you'd have to free your memory properly.)
  • On the other hand, running destructors at program exit can actually take quite a lot of time, and if all they do is release memory (which would be released anyway), then it makes a lot of sense to just short-circuit that and exit immediately instead.

Most operating systems will reclaim memory upon process exit. Exceptions may include certain RTOSes, old mobile devices, etc.

In an absolute sense your app won't leak memory; however, it's good practice to clean up memory you allocate even if you know it won't cause a real leak. The issue is that leaks are much, much harder to fix than to avoid in the first place. Let's say you decide to move the functionality in your main() to another function. You may then end up with a real leak.
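To make that concrete, here is a hypothetical refactor (do_stuff and the g_leaked counter are made up for illustration): the allocation that was harmless as a one-shot in main() becomes a real, growing leak once the code is called repeatedly:

```cpp
#include <cstddef>

// Illustrative counter only -- real leaks are invisible without tooling.
std::size_t g_leaked = 0;

void do_stuff() {
    char* str = new char[10];
    str[0] = 'x';          // do stuff with str
    g_leaked += 10;        // every call now leaks another 10 bytes...
    // delete[] str;       // ...because this line was left out in the refactor
}
```

What used to be "a few bytes freed by the OS at exit" is now 10 bytes per call, forever.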

It's also bad aesthetics: many developers will look at the unfreed str and feel slightly nauseous. :(

You have received plenty of answers drawn from professional experience. Here is a more naive answer, but one I consider factual.

  • Summary

    3. Does it have deeper consequences?

    A: Answered in some detail below.

    2. Is it just good practice?

    A: It is considered good practice. Release resources/memory you have acquired once you are sure they are no longer used.

    1. Why would I need to delete str if I am going to end the program anyways?
      I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?

    A: You may need to, or you may not; in fact, only you can tell why. Some explanation follows.

    I think it depends. Here are some anticipated questions; the term program may mean either an application or a function.

    Q: Does it depend on what the program does?

    A: If destroying the universe were acceptable, then no. However, the program might not work as expected, and might not even complete what it is supposed to do. You might want to think seriously about why you would build a program like that.

    Q: Does it depend on how complicated the program is?

    A: No. See Explanation.

    Q: Does it depend on what stability is expected of the program?

    A: Closely.

    And I consider that it depends on

    1. What is the universe of the program?
    2. How well is the program expected to do its work?
    3. How much does the program care about others, and about the universe it lives in?

      About the term universe, see Explanation.

    In summary, it depends on what you care about.


  • Explanation

    Important: if we define the term program as a function, then its universe is the application. Many details are omitted below; as an idea for understanding, it is long enough, though.

    We have all seen the kind of diagram that illustrates the relationship between application software and system software.

    [diagram: application software stacked on top of system software]

    But to make clear the scope that each one covers, I would suggest a reversed layout. Since we are talking about software only, the hardware layer is omitted in the following diagram.

    [diagram: reversed layout, with the OS as the outermost layer]

    With this diagram, we see that the OS covers the biggest scope, which is the current universe, sometimes called the environment. You may imagine the whole architecture as a stack of disks like the diagram, forming either a cylinder or a torus (a ball works too, but is harder to picture). Here I should mention that the outermost OS layer is in fact a unibody; the runtime may be either single or multiple, depending on the implementation.

    It is important that the runtime is responsible to both the OS and the applications, but the latter is more critical. The runtime is the universe of the applications; if it is destroyed, all applications running under it are gone.

    Unlike humans on the Earth: we live here, but we do not consist of the Earth; we could still live in another suitable environment if the Earth were destroyed while we were elsewhere.

    However, we can no longer exist when the universe is destroyed, because we do not merely live in the universe; we also consist of it.

    As mentioned above, the runtime is also responsible to the OS. The left circle in the following diagram shows what that may look like.

    [diagram: two circles showing applications in relation to the OS]

    This is mostly like a C program running in the OS. When the relationship between an application and the OS matches this, it is the same situation as the runtime in the OS above. In this diagram, the OS is the universe of the applications. The reason the applications here should be responsible to the OS is that the OS might not virtualize their code, or might allow itself to be crashed. If the OS always prevents them from doing so, then it is self-responsible no matter what the applications do. But think about drivers: they are one scenario where the OS must allow itself to be crashed, since that kind of application is treated as part of the OS.

    Finally, let us look at the right circle in the diagram above. In this case, the application itself is the universe. Sometimes we call this kind of application an operating system. If an OS never allows custom code to be loaded and run, then it does everything itself. Even if it does, after it terminates, the memory goes nowhere but the hardware. All the deallocation that may be necessary must happen before it is terminated.

    So, how much does your program care about the others? How much does it care about its universe? And how well is it expected to do its work? It depends on what you care about.

Why would I need to delete str if I am going to end the program anyways?

Because you don't want to be lazy ...

I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?

Nope, I don't care about the land of unicorns either. The Land of Arwen is a different matter; then we could cut their horns off and put them to good use (I've heard it's a good aphrodisiac).

Is it just good practice?

It is just good practice.

Does it have deeper consequences?

Someone else has to clean up after you. Maybe you like that, I moved out from under my parents' roof many years ago.

Place a while(1) loop around your code without the delete. The code's complexity does not matter; memory leaks are a function of process run time.

From the perspective of debugging, not releasing system resources (file handles, etc.) can cause more significant and hard-to-find bugs. Memory leaks, while important, are typically much easier to diagnose (why can't I write to this file?). Bad style will become more of a problem if you start working with threads.

int main()
{
    while(1)
    {
        char* str = new char[10];

        for(int i=0;i<5;i++)
        {
            //Do stuff with str
        }
        // str goes out of scope here without a delete[] --
        // every iteration leaks another 10 bytes
    }

    return 0;
}
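One way to keep the loop leak-free, sticking with standard C++: give the buffer automatic lifetime inside the iteration so it is released before the next allocation (one_iteration below is just an illustrative helper for a single pass):

```cpp
#include <cstddef>
#include <vector>

// One pass of the loop, leak-free: the vector's destructor releases
// the buffer at the end of each iteration, so looping forever no
// longer accumulates memory.
std::size_t one_iteration() {
    std::vector<char> str(10);   // replaces new char[10]; freed at scope exit
    for (int i = 0; i < 5; i++) {
        str[i] = 'x';            // do stuff with str
    }
    return str.size();
}
```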

TECHNICALLY, a programmer shouldn't rely on the OS to do anything. The OS isn't required to reclaim lost memory in this fashion.

If you do write the code that deletes all your dynamically allocated memory, then you are future proofing the code and letting others use it in a larger project.

Source: Allocation and GC Myths (PostScript alert!)

Allocation Myth 4: Non-garbage-collected programs should always
deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause
growing leaks. They are rarely acceptable. But programs that retain
most allocated memory until program exit often perform better without
any intervening deallocation. malloc is much easier to implement if
there is no free.

In most cases, deallocating memory just before program exit is
pointless. The OS will reclaim it anyway. Free will touch and page in
the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations.
Some "leaks" are good!
  • I think it's a very poor practice to use malloc/new without calling free/delete.

  • If the memory's going to get reclaimed anyway, what harm can there be from explicitly deallocating when you need to?

  • Maybe if the OS "reclaims" memory faster than free does, you'll see increased performance; but this technique won't help any program that must keep running for a long period of time.

Having said that, I'd still recommend you use free/delete.


If you get into this habit, who's to say that you won't one day accidentally apply this approach somewhere it matters?


One should always deallocate resources after one is done with them, be it file handles, memory, or mutexes. Having that habit means one will not make that sort of mistake when building servers. Some servers are expected to run 24x7. In those cases, any leak of any sort means that your server will eventually run out of that resource and hang or crash in some way. In a short utility program, yeah, a leak isn't that bad. In any server, any leak is death. Do yourself a favor. Clean up after yourself. It's a good habit.


Think about your class 'A' having to destruct. If you don't call
'delete' on 'a', that destructor won't get called. Usually, that won't
really matter if the process ends anyway. But what if the destructor
has to release e.g. objects in a database? Flush a cache to a logfile?
Write a memory cache back to disk? **You see, it's not just 'good
practice' to delete objects; in some situations it is required**.

Another reason that I haven't seen mentioned yet is to keep the output of static and dynamic analyzer tools (e.g. Valgrind or Coverity) cleaner and quieter. Clean output with zero memory leaks or zero reported issues means that when a new one pops up, it is easier to detect and fix.

You never know how your simple example will be used or evolved. It's better to start as clean and crisp as possible.

Not to mention that if you are going to apply for a job as a C++ programmer, there is a very good chance that you won't get past the interview because of the missing delete. First, programmers usually don't like any leaks (and the person interviewing you will surely be one of them), and second, most companies (all I've worked in, at least) have a "no-leak" policy. Generally, the software you write is supposed to run for quite a while, creating and destroying objects on the go. In such an environment, leaks can lead to disasters...

Instead of talking about this specific example, I will talk about general cases. Generally, it is important to explicitly call delete to deallocate memory because (in the case of C++) you may have some code in the destructor that you want to execute, like maybe writing some data to a log file or sending a shutdown signal to some other process. If you let the OS free your memory for you, the code in your destructor will not be executed.

On the other hand, most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself, and as in the destructor example above, the OS won't call your destructor, which can create undesirable behavior in certain cases!

I personally consider it bad practice to rely on the OS to free your memory (even though it will do so). The reason is that if later on you have to integrate your code with a larger program, you will spend hours tracking down and fixing memory leaks!

So clean your room before leaving!
