
Heap allocation with valgrind

#include <cstdlib>
#include <memory>
#include <vector>
using namespace std;

int main() {
  const size_t size = 10000000;
  using T = unsigned short[];
  vector<unique_ptr<T>> v;
  v.resize(size);
  for (size_t n = 0; n != size; ++n) {
    v[n] = make_unique<T>(3);   // one 3-element array per entry
    for (int i = 0; i != 3; ++i)
      v[n][i] = rand();
  }
}

I want to measure how much memory it uses.

  1. What I expect: 10,000,000 * (8 + 2*3) = 140,000,000 bytes: 8 bytes per pointer and 2 bytes for each of the 3 unsigned shorts (see the small sketch after this list).
  2. What "valgrind --tool=memcheck" reports: total heap usage: 10,000,112 allocs, 10,000,111 frees, 140,161,828 bytes allocated
  3. What it actually is: VIRT = 419044, RES = 394180 (KB).
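
A small sketch of the arithmetic from point 1 (assuming a 64-bit platform, where a pointer is 8 bytes; names are for illustration only):

#include <cstddef>

// Expected payload only: one pointer per entry plus three unsigned shorts,
// ignoring any allocator bookkeeping.
constexpr std::size_t count = 10'000'000;
constexpr std::size_t per_entry = sizeof(void*) + 3 * sizeof(unsigned short); // 8 + 6 = 14
constexpr std::size_t expected_bytes = count * per_entry;                     // 140,000,000

static_assert(expected_bytes == 140'000'000, "holds on a typical 64-bit platform");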

Why is the actual size almost 3 times bigger than what valgrind reports? I am running it on WSL, Ubuntu.

You are forgetting the overhead of memory allocation itself. Just the new/delete itself: something has to keep track of it. Somewhere.

You have it easy, but the C++ library does all the hard work for you.

You have it easy: you just new some arbitrary number of bytes and delete them later, and you're done. But it's not as easy for the C++ library. It has to know how big each new-ed object (or array of objects) is, so that when it is delete-d the C++ library knows how much memory has just been freed. And when adjacent objects in memory get delete-d, the allocator also needs to be aware of that and combine the two adjacent freed blocks into one bigger chunk of memory, so that it could, potentially, be used to new a larger object at some point later down the road.

This complexity doesn't come for free. It needs to be tracked and totaled up.

All of this jazz is going to require at least a pointer value, and a byte count value, at a minimum. Per allocation. A robust internal memory allocator may want to store an extra pointer, somewhere, but let's start with just a pointer and a byte count, as a minimalist implementation for a memory allocator.

You are allocating sizeof(unsigned short)*3 bytes at a time, or 6 bytes by my count. On a 64-bit platform, a pointer will need to be 8 bytes long. Let's say you have a smart memory allocator that maintains a separate pool for allocations that don't exceed 64 KB in size, so the byte count needs to be only 2 bytes. That's an overhead of ten bytes per allocation.
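
A rough sketch of what such a minimalist bookkeeping record could look like (an illustration of the idea, not the layout of any real allocator):

#include <cstdint>

// Hypothetical per-allocation record: a pointer plus a 16-bit byte count,
// the latter being enough because this pool only holds allocations < 64 KB.
struct AllocationRecord {
    void*         block;  // 8 bytes on a 64-bit platform
    std::uint16_t bytes;  // 2 bytes
};

// 10 bytes of raw fields, but pointer alignment pads the record out to
// 16 bytes on a typical 64-bit implementation.
static_assert(sizeof(AllocationRecord) == 16, "padded to pointer alignment");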

That overhead needs to be stored, kept track of, and piled on to every allocation. So, given an additional overhead of at least 10 bytes for every 6 bytes allocated, observing 2-3 times the expected memory usage seems to be pretty much in the ballpark. Definitely in the ballpark if the byte count gets tracked, internally, as 4 bytes, or if the memory pool is a doubly-linked list, requiring another 8 bytes (maybe 4 bytes if you're lucky) thrown into the mix.
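
If you want to see what a single 6-byte allocation really costs on your system, one way (glibc-specific, and assuming operator new[] forwards to malloc, which it normally does) is to ask malloc_usable_size:

#include <cstdio>
#include <malloc.h>   // malloc_usable_size, a glibc extension

int main() {
    unsigned short* p = new unsigned short[3];   // 6 bytes requested
    // On 64-bit glibc this typically prints 24: a 32-byte chunk once the
    // allocator's own header is included. 10,000,000 such chunks, plus the
    // ~80 MB vector of pointers, is roughly the ~400 MB seen in RES.
    std::printf("requested 6 bytes, usable %zu bytes\n", malloc_usable_size(p));
    delete[] p;
    return 0;
}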

Valgrind is just reporting what it sees when it intercepts calls to malloc and new.

If you really want to see how much memory is being used, try massif, and also try massif with --pages-as-heap=yes. That option will also cause massif to record pages mapped with mmap.
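
For example, a typical massif run (the binary name is just a placeholder) looks like:

valgrind --tool=massif --pages-as-heap=yes ./a.out
ms_print massif.out.<pid>

With --pages-as-heap=yes, massif measures whole pages obtained from the kernel rather than individual malloc blocks, which is much closer to the RES figure you are comparing against.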
