
Bulk memory free of fragmented STL containers

When we destruct a very large nested list/map of complex objects whose memory allocations are heavily fragmented, I assume C++ has to invoke the destructors and free the memory one by one, recursively, which takes a lot of time and is inefficient?

In my case, I find it sometimes takes a minute or more to destruct a 300 GB object.

The operating system can kill a process that uses a lot of memory efficiently, because it just frees all of the memory without caring about the logic inside the process.

I am wondering whether there is an existing C/C++ library that can do just that: provide a customized memory allocator maintaining an id system, so that I can specify an id when creating an allocator for a given large STL container (and its elements). When I want to destruct the container, I free all the memory allocated under that id and simply discard the pointer to the outer container (skipping all the destructors)? Just like we can "kill" a pid...

Thanks!

This can be done with a pool allocator and placement new. Of course you will have some limitations, like finding a common slot size for the pool (if you don't want fine granularity), but a simple scenario like the following:

#include <cstddef>  // std::byte
#include <new>      // placement new

struct Foo {
  double x, y;
  Foo(double x, double y) : x(x), y(y) {}
};

// One raw buffer, big enough for 10 Foo objects.
std::byte* buffer = new std::byte[sizeof(Foo) * 10];

// Construct the objects in place; no per-object allocation happens here.
Foo* foo1 = new (buffer) Foo(1.0, 2.0);
Foo* foo2 = new (buffer + sizeof(Foo)) Foo(1.0, 2.0);

// Frees the whole buffer in one call. No destructors run for foo1/foo2;
// that is fine here only because Foo is trivially destructible.
delete[] buffer;

explains the basic principle. This must be done with precautions, though, since no one calls your destructors for you (you would do that manually through foo1->~Foo()). But if the destructor has no side effects, or you can take care of them all at once, then the standard allows you not to call it explicitly.

Now the tricky part is that if you are using STL containers, they internally do a lot of allocations to store their data (especially node-based containers like std::map or std::list). So you'd need to write a custom allocator<T> which wraps an efficient pooling scheme, as sketched below.
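A minimal sketch of that idea, assuming C++17 is available: the standard std::pmr facilities already provide an arena-style resource, std::pmr::monotonic_buffer_resource, whose deallocate is a no-op and which returns all its blocks in one shot when it is destroyed. The container's destructor still runs, but the per-node frees disappear:

#include <map>
#include <memory_resource>

int main() {
  // All allocations go through the arena; it frees nothing until it is
  // destroyed (or release() is called), then returns everything at once.
  std::pmr::monotonic_buffer_resource arena;
  {
    std::pmr::map<int, int> big(&arena);  // every node comes from arena
    for (int i = 0; i < 1000000; ++i)
      big.emplace(i, 2 * i);
  }  // the map's destructor still walks the nodes, but frees nothing

  // arena goes out of scope here and releases all its blocks in one shot.
}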

If you want efficient freeing of memory, doing a single delete is the way to go. Though keep in mind that freeing the memory isn't the only thing the delete call does: it also calls the destructor. If that destructor is not trivial or not visible, your compiler still has to invoke it via a function call.

That said, use std::vector where possible. I've already written custom sets and maps on top of vector, with less functionality (no removal), to gain memory and performance; see the sketch below.
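A minimal sketch of such a vector-backed set (FlatSet is an illustrative name, not a standard class):

#include <algorithm>
#include <vector>

// A sorted std::vector used as a set: contiguous memory, few allocation
// blocks, no erase support.
template <class T>
class FlatSet {
  std::vector<T> data_;  // kept sorted at all times
public:
  void insert(const T& v) {
    auto it = std::lower_bound(data_.begin(), data_.end(), v);
    if (it == data_.end() || v < *it)
      data_.insert(it, v);
  }
  bool contains(const T& v) const {
    return std::binary_search(data_.begin(), data_.end(), v);
  }
};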

If you have a lot of small objects, like vectors that usually hold 1, 2, ... 16 elements, you can gain speed by using more memory. boost::container::small_vector and similar containers can help you avoid allocating at all. Using this in algorithms has already saved me remarkable percentages (>90%) on real-world code.
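For example (the inline capacity of 16 is arbitrary):

#include <boost/container/small_vector.hpp>

int main() {
  // The first 16 elements live inside the object itself; only the 17th
  // push_back touches the heap.
  boost::container::small_vector<int, 16> v;
  for (int i = 0; i < 16; ++i)
    v.push_back(i);  // no allocation
  v.push_back(16);   // first heap allocation
}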

Finally, you can't always win. If you can estimate the memory usage, or something close to it, you can use the stack allocator of Howard Hinnant ( https://howardhinnant.github.io/stack_alloc.html ). Don't be fooled by the name: you can also allocate the backing memory on the heap. With a few manipulations, I suspect you should be able to change it to accept a runtime size instead. It isn't perfect, though it could cover a big part of your problem.
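Usage looks roughly like the example on that page (short_alloc and its arena_type are the types the page defines; details may differ by version):

#include <cstddef>
#include <vector>
#include "short_alloc.h"  // from the page linked above

// A vector drawing its first BufSize bytes from a pre-sized arena.
template <class T, std::size_t BufSize = 200>
using SmallVector = std::vector<T, short_alloc<T, BufSize, alignof(T)>>;

int main() {
  SmallVector<int>::allocator_type::arena_type a;  // the backing buffer
  SmallVector<int> v{a};
  v.push_back(1);  // no heap allocation while the arena has room
}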

That said, you can always create a memory leak on purpose, though that throws away the side effects of the destructors. You could extract the nodes out of your map and store them in a heap-allocated vector that you never free. This might be leaping into UB, I'm not an expert on that; a sketch of the extraction part follows.
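A minimal sketch using C++17 node extraction (the deliberate leak is the new without a matching delete):

#include <map>
#include <vector>

int main() {
  std::map<int, int> m{{1, 2}, {3, 4}};

  // Pull every node out of the map; the map no longer owns them.
  auto* nodes = new std::vector<std::map<int, int>::node_type>;
  while (!m.empty())
    nodes->push_back(m.extract(m.begin()));

  // Never deleting nodes leaks the elements on purpose: no element
  // destructor runs, and the OS reclaims the pages at process exit.
}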

Oh, and finally, you could derive from the standard allocator and only override the deallocate function. Look at a global flag to decide whether to call the actual deallocate. Flip the switch, and do the memory leaking at exit, roughly as sketched below.
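A rough sketch of that trick (LeakyAllocator and g_leak_on_free are made-up names; deallocate is not virtual, so this works only because containers call it on the derived type through allocator_traits):

#include <cstddef>
#include <map>
#include <memory>

// Global switch: flip to true right before exit to turn every
// deallocate() into a no-op and leak on purpose.
inline bool g_leak_on_free = false;

template <class T>
struct LeakyAllocator : std::allocator<T> {
  LeakyAllocator() = default;
  template <class U>
  LeakyAllocator(const LeakyAllocator<U>&) {}

  void deallocate(T* p, std::size_t n) {
    if (!g_leak_on_free)
      std::allocator<T>::deallocate(p, n);
  }
};

int main() {
  std::map<int, int, std::less<int>,
           LeakyAllocator<std::pair<const int, int>>> m;
  m[1] = 2;
  g_leak_on_free = true;  // from here on, frees are skipped
}  // m's destructor still runs, but its node memory is leaked

Note that this skips only the frees; the element destructors still run.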
