
valgrind/memcheck fails to release “large” memory chunks

Consider this small program:

#include <stdio.h>
#include <stdlib.h>

// Change 60000 to 70000 and valgrind (memcheck) eats my memory
#define L (60000)
#define M (100*(1<<20))

int main(void) {
  int i;
  for (i = 0; i < M; ++i) {
    unsigned char *a = malloc(L);
    a[i % L] = i % 128; // Touch something; a[0] is not enough
    free(a);
    if (i % (1<<16) == 0)
      fprintf(stderr, "i = %d\n", i);
  }
  return 0;
}

Compiling with gcc -o vg and running valgrind --leak-check=full ./vg works fine, with memcheck using roughly 1.5% of my memory. However, after changing L to 70000 (I suppose the magic limit is 1<<16 = 65536, which lies between the two values), memcheck uses an ever-increasing amount of memory until the kernel finally kills it.
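
For reference, the reproduction steps look something like this (the source filename vg.c is a guess; the question only names the output binary):

$ gcc -o vg vg.c
$ valgrind --leak-check=full ./vg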

Is there anything one can do about this? There is obviously no leak, but there appears to be one in valgrind itself (!?), making it difficult to use for checking programs with lots of large and short-lived allocations.

Some background, not sure which is relevant:

$ valgrind --version
valgrind-3.7.0
$ gcc --version
gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
$ /lib/libc.so.6
GNU C Library stable release version 2.12, by Roland McGrath et al.
$ uname -rms
Linux 2.6.32-220.2.1.el6.x86_64 x86_64

This is very likely caused by a gcc 4.4 bug, which is bypassed in valgrind 3.8.0 (not yet released).

An extract from the Valgrind 3.8.0 NEWS file:

ni-bz Bypass gcc4.4/4.5 wrong code generation causing out of memory or asserts

Set the resource limit of your process to unlimited using setrlimit, so that the kernel won't kill your process when it exceeds its memory limit and will instead let it keep extending into the virtual address space.
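
A minimal sketch of that suggestion. It assumes RLIMIT_AS (the virtual address space cap) is the limit being hit, which the answer does not specify; note also that an unprivileged process can only raise its soft limit up to its current hard limit:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
  // Assumption: RLIMIT_AS (total virtual address space) is the limit
  // being hit; on some systems RLIMIT_DATA may be relevant instead.
  struct rlimit rl;
  rl.rlim_cur = RLIM_INFINITY;  // soft limit
  rl.rlim_max = RLIM_INFINITY;  // hard limit (raising it needs privilege)
  if (setrlimit(RLIMIT_AS, &rl) != 0) {
    perror("setrlimit");
    return 1;
  }
  // ... run the allocation-heavy code here ...
  return 0;
}

From a shell, running ulimit -v unlimited before invoking valgrind achieves the same effect for the child process.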

Hope this helps.
