
C++ dynamically allocated memory

I don't quite get the point of dynamically allocated memory and I am hoping you guys can make things clearer for me.

First of all, every time we allocate memory we simply get a pointer to that memory.

int * dynInt = new int;

So what is the difference between doing what I did above and:

int someInt;
int* dynInt = &someInt;

As I understand, in both cases memory is allocated for an int, and we get a pointer to that memory.

So what's the difference between the two? When is one method preferred to the other?

Furthermore, why do I need to free up memory with

delete dynInt;

in the first case, but not in the second case?

My guesses are:

  1. When dynamically allocating memory for an object, the object doesn't get initialized, while if you do something like in the second case, the object gets initialized. If this is the only difference, is there any motivation behind this apart from the fact that dynamically allocating memory is faster?

  2. The reason we don't need to use delete for the second case is because the fact that the object was initialized creates some kind of an automatic destruction routine.

Those are just guesses; I would love it if someone corrected me and clarified things for me.

The difference is in storage duration.

  • Objects with automatic storage duration are your "normal" objects that automatically go out of scope at the end of the block in which they're defined.

    Create them like int someInt;

    You may have heard of them as "stack objects", though I object to this terminology.

  • Objects with dynamic storage duration have something of a "manual" lifetime; you have to destroy them yourself with delete, and create them with the keyword new.

    You may have heard of them as "heap objects", though I object to this, too.

The use of pointers is actually not strictly relevant to either of them. You can have a pointer to an object of automatic storage duration (your second example), and you can have a pointer to an object of dynamic storage duration (your first example).

But it's rare that you'll want a pointer to an automatic object, because:

  1. you don't have one "by default";
  2. the object isn't going to last very long, so there's not a lot you can do with such a pointer.

By contrast, dynamic objects are often accessed through pointers, simply because the syntax comes close to enforcing it. new returns a pointer for you to use, you have to pass a pointer to delete, and (aside from using references) there's actually no other way to access the object. It lives "out there" in a cloud of dynamicness that's not sitting in the local scope.

Because of this, the usage of pointers is sometimes confused with the usage of dynamic storage, but in fact the former is not causally related to the latter.
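A minimal sketch of that point, reusing the int examples from the question (variable names are placeholders): both objects are reached through a pointer, but only one of them was dynamically allocated and needs a delete.

#include <iostream>

int main() {
    // pointer to an object with automatic storage duration
    int someInt = 42;
    int* p1 = &someInt;     // no new, no delete; someInt dies at the end of main

    // pointer to an object with dynamic storage duration
    int* p2 = new int(42);  // lives until we delete it

    std::cout << *p1 << ' ' << *p2 << '\n';

    delete p2;              // required for the dynamic object only
}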

An object created like this:

int foo;

has automatic storage duration - the object lives until the variable foo goes out of scope. This means that in your second example, dynInt will be an invalid pointer once someInt goes out of scope (for example, at the end of a function).

An object created like this:

int* foo = new int;

has dynamic storage duration - the object lives until you explicitly call delete on it.
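To make the lifetime difference concrete, here is a small sketch (the function names are made up for illustration). Returning the address of a local variable is shown only to demonstrate the dangling pointer; most compilers will warn about it, and dereferencing it would be undefined behavior.

int* automatic_example() {
    int someInt = 7;
    return &someInt;       // dangling: someInt is destroyed when the function returns
}

int* dynamic_example() {
    return new int(7);     // still valid after the return; the caller must delete it
}

int main() {
    int* bad  = automatic_example();   // do not dereference: undefined behavior
    int* good = dynamic_example();
    int value = *good;                 // fine
    delete good;
    (void)bad; (void)value;
}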

Initialization of the objects is an orthogonal concept; it is not directly related to which type of storage-duration you use. See here for more information on initialization.
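For instance, a brief sketch of how initialization and storage duration combine independently:

int main() {
    int a;              // automatic storage, value is indeterminate
    int b = 0;          // automatic storage, explicitly initialized

    int* p = new int;   // dynamic storage, *p is indeterminate
    int* q = new int(); // dynamic storage, *q is value-initialized to 0

    delete p;
    delete q;
    (void)a; (void)b;
}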

For a single integer, it only makes sense if you need to keep the value after, for example, returning from a function. Had you declared someInt as you did, it would have been invalidated as soon as it went out of scope.

However, in general there is a greater use for dynamic allocation. There are many things that your program doesn't know before allocation and depends on input. For example, your program needs to read an image file. How big is that image file? We could say we store it in an array like this:

unsigned char data[1000000];

But that would only work if the image size was less than or equal to 1000000 bytes, and would also be wasteful for smaller images. Instead, we can dynamically allocate the memory:

unsigned char* data = new unsigned char[file_size];

Here, file_size is determined at runtime. You couldn't possibly tell this value at the time of compilation.
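A sketch of how that might look; the file name and the error handling here are placeholders, not part of the original example:

#include <cstddef>
#include <fstream>
#include <iostream>

int main() {
    // "image.dat" is a placeholder file name for this sketch
    std::ifstream file("image.dat", std::ios::binary | std::ios::ate);
    if (!file) return 1;

    std::size_t file_size = static_cast<std::size_t>(file.tellg()); // known only at runtime
    file.seekg(0);

    unsigned char* data = new unsigned char[file_size]; // buffer sized to fit exactly
    file.read(reinterpret_cast<char*>(data), static_cast<std::streamsize>(file_size));

    std::cout << "read " << file_size << " bytes\n";

    delete[] data; // note: delete[] (not delete) for arrays allocated with new[]
    return 0;
}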

  1. Your program gets an initial chunk of memory at startup. This memory is called the stack. The amount is fixed and usually only a few megabytes these days.

  2. Your program can ask the OS for additional memory. This is called dynamic memory allocation. This allocates memory on the free store (C++ terminology) or the heap (C terminology). You can ask for as much memory as the system is willing to give (multiple gigabytes).

The syntax for allocating a variable on the stack looks like this:

{
    int a; // allocate on the stack
} // automatic cleanup on scope exit

The syntax for allocating a variable using memory from the free store looks like this:

int * a = new int; // ask the OS for memory to store an int
delete a; // user is responsible for deleting the object


To answer your questions:

When is one method preferred to the other?

  1. Generally stack allocation is preferred.
  2. Dynamic allocation is required when you need to store a polymorphic object through a pointer to its base type.
  3. Always use a smart pointer to automate deletion:
    • C++03: boost::scoped_ptr, boost::shared_ptr or std::auto_ptr.
    • C++11: std::unique_ptr or std::shared_ptr.

For example:

// stack allocation (safe)
Circle c; 

// heap allocation (unsafe)
Shape * shape = new Circle;
delete shape;

// heap allocation with smart pointers (safe)
std::unique_ptr<Shape> shape(new Circle);
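A fuller, self-contained sketch of that last variant; Shape and Circle here are minimal placeholder classes, and the virtual destructor is what makes destroying the object through a Shape pointer safe:

#include <iostream>
#include <memory>

// Shape and Circle are placeholder classes for this sketch
struct Shape {
    virtual ~Shape() = default;        // virtual destructor: needed when deleting via a Shape*
    virtual double area() const = 0;
};

struct Circle : Shape {
    double radius = 1.0;
    double area() const override { return 3.14159265 * radius * radius; }
};

int main() {
    std::unique_ptr<Shape> shape(new Circle);  // heap allocation with a smart pointer (safe)
    std::cout << shape->area() << '\n';
}   // no delete needed: unique_ptr destroys the Circle automatically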

Furthermore, why do I need to free up memory in the first case, but not in the second case?

As I mentioned above, stack-allocated variables are automatically deallocated on scope exit. Note that you are not allowed to delete stack memory; doing so is undefined behavior and would most likely crash your application.

Whenever you use new in C++, memory is typically allocated through malloc, which itself obtains memory from the OS via the sbrk system call (or similar). Therefore no one except the allocator and the OS knows the requested size - in particular, the compiler does not. So you'll have to use delete (which calls free, which eventually goes back to sbrk) to give the memory back to the system. Otherwise you'll get a memory leak.

Now, when it comes to your second case, the compiler does have knowledge about the size of the allocated memory - in your case, the size of one int. Taking a pointer to the address of this int does not change anything about that knowledge. In other words: the compiler is able to take care of freeing the memory. In the first case, with new, this is not possible.

In addition to that, new and malloc do not necessarily allocate exactly the requested size, which makes things a bit more complicated.

Edit

Two more common phrases: your second case is also known as automatic (sometimes loosely called static) memory allocation, handled by the compiler; the first case is dynamic memory allocation, handled by the runtime system.

Read more about dynamic memory allocation and also garbage collection

You really need to read a good C or C++ programming book.

Explaining in detail would take a lot of time.

The heap is the memory inside which dynamic allocation (with new in C++ or malloc in C) happens. There are system calls involved with growing and shrinking the heap; on Linux, they are mmap & munmap (used to implement malloc, new, etc.).

You can call the allocation primitive many times, so you could put int *p = new int; inside a loop and get a fresh location every time you loop!

Don't forget to release memory (with delete in C++ or free in C). Otherwise, you'll get a memory leak (a naughty kind of bug). On Linux, valgrind helps to catch them.
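A tiny sketch of that loop idea: because all three ints are alive at the same time, each new must return a distinct heap address, and each one must eventually be deleted or it leaks (something valgrind would report).

#include <iostream>
#include <vector>

int main() {
    std::vector<int*> ints;
    for (int i = 0; i < 3; ++i)
        ints.push_back(new int(i));              // a fresh heap location on every iteration

    for (int* p : ints)
        std::cout << p << " -> " << *p << '\n';  // three distinct addresses

    for (int* p : ints)
        delete p;                                // without this loop, each int would leak
}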

What happens if your program is supposed to let the user store any number of integers? Then you'll need to decide during run-time, based on the user's input, how many ints to allocate, so this must be done dynamically.

In a nutshell, a dynamically allocated object's lifetime is controlled by you and not by the language. This allows you to let it live as long as it is required (as opposed to until the end of the scope), possibly determined by a condition that can only be evaluated at run-time.
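For example, here is a sketch of an object created inside a function that stays alive after the function returns; Record and load_record are made-up names used only for illustration.

#include <memory>
#include <string>

// Record is a hypothetical type, just for this sketch
struct Record {
    std::string name;
};

std::unique_ptr<Record> load_record(const std::string& name) {
    std::unique_ptr<Record> r(new Record);
    r->name = name;
    return r;                 // the Record outlives this function's scope
}

int main() {
    std::unique_ptr<Record> rec = load_record("example");
    // rec is still usable here; the Record is destroyed only when rec goes out of scope
}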

Also, dynamic memory is typically much more "scalable" - i.e. you can allocate more and/or larger objects compared to stack-based allocation.

The allocation essentially "marks" a piece of memory so no other object can be allocated in the same space. De-allocation "unmarks" that piece of memory so it can be reused for later allocations. If you fail to deallocate memory after it is no longer needed, you get a condition known as a "memory leak": your program is occupying memory it no longer needs, which can lead to a failure to allocate new memory (due to the lack of free memory) and generally puts an unnecessary strain on the system.
