
Reading the output of Python's memory_profiler

I have a problem understanding the output of memory_profiler. Basically, it looks like this:

Filename: tspviz.py

Line #    Mem usage    Increment   Line Contents
================================================
     7  34.589844 MiB  34.589844 MiB   @profile(precision=6)
     8                             def parse_arguments():
     9  34.917969 MiB   0.328125 MiB       a = [x**2 for x in range(10000)]

On line 9 we can clearly see that some memory is used. Now, I measured the size of this list with sys.getsizeof(), and I double-checked that it is in fact a list of ints:

print(sys.getsizeof(a))
print(type(a[0]))

And this is what I got:

87624
<class 'int'>

Well, now there's a problem. As far as I checked, an int in Python takes 28 bytes on my 64-bit Windows machine. I don't know if that's correct, but even so: 10000 * 28 = 280000 bytes = 0.28 MB, and 0.28 MB = 0.267029 MiB (the output from memory_profiler is displayed in MiB). The problem is that the table shows 0.328125 MiB, so the difference is 0.061096 MiB.
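The arithmetic in the paragraph above can be checked directly (a sketch; the 28 bytes per int is the figure measured above, which holds for small ints on 64-bit CPython):

```python
per_int = 28                       # bytes per small int object (measured above)
n = 10000
total_bytes = n * per_int          # naive estimate: just the int objects
total_mib = total_bytes / 2**20    # 1 MiB = 1048576 bytes
print(total_bytes)                 # → 280000
print(round(total_mib, 6))         # → 0.267029
```

This reproduces the 0.267 MiB figure, which is indeed well short of the 0.328125 MiB that memory_profiler reports.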

My concern here is: does constructing a list in Python really need that much memory, or am I interpreting something the wrong way?

PS: Why, when this list was of length 1000000, was the number in the Increment column for this line something like -9xxx MiB? I mean, why a negative number?

Python lists don't store the objects themselves, but references to objects. The 64-bit version of Python uses 8 bytes per reference, so 10000 ints require 80000 bytes of references. In your example, sys.getsizeof(a) returned 87624 because, for efficiency, lists allocate extra space proportional to their size. See this post for more.
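A sketch that makes the reference-vs-object distinction visible (assuming 64-bit CPython, so 8 bytes per reference; the exact over-allocation varies by Python version):

```python
import sys

a = [x**2 for x in range(10000)]

refs = 8 * len(a)                    # 80000 bytes of references alone
shallow = sys.getsizeof(a)           # list header + references + spare slots
print(shallow)                       # e.g. 87624, as in the question
print(shallow - refs)                # header plus over-allocated spare slots
```

Note that sys.getsizeof on a list is shallow: it counts the reference array but not the int objects those references point to.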

The space taken by an int varies depending on how large it is, but ints up to 2^30 - 1 do seem to take 28 bytes on 64-bit Python (except 0, which takes only 24 bytes). So in total, the size taken by the list is 87624 + 279996 = 367620 bytes, which is about 0.35 MiB.
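The "deep" total above can be recomputed as the list's own buffer plus the size of every int it references (a sketch; exact per-int sizes depend on the CPython version and platform):

```python
import sys

a = [x**2 for x in range(10000)]

shallow = sys.getsizeof(a)                      # the list object itself
elements = sum(sys.getsizeof(x) for x in a)     # every referenced int
total = shallow + elements
print(elements)                                 # ~279996 on 64-bit CPython
print(total, round(total / 2**20, 2), "MiB")    # ~0.35 MiB, as above
```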

The remaining discrepancy between this and the output from memory_profiler is probably due to this:

This module gets the memory consumption by querying the operating system kernel about the amount of memory the current process has allocated, which might be slightly different from the amount of memory that is actually used by the Python interpreter. Also, because of how the garbage collector works in Python the result might be different between platforms and even between runs.
