
How does python allocate memory for list & string?

Here is my test code for Python 3:

#! /usr/bin/python3
from memory_profiler import profile

@profile(precision=10)
def f():
    huge_list = [x for x in range(2000)]
    del huge_list
    print("finish")

if __name__ == "__main__":
    f()

Output:

Line #    Mem usage    Increment   Line Contents
================================================
     4  17.2109375000 MiB  17.2109375000 MiB   @profile(precision=10)
     5                             def f():
     6  17.2109375000 MiB   0.0000000000 MiB       huge_list = [x for x in range(2000)]
     7  17.2109375000 MiB   0.0000000000 MiB       del huge_list
     8  17.2226562500 MiB   0.0117187500 MiB       print("finish")

It shows that huge_list = [x for x in range(2000)] doesn't take any memory.
I changed it to huge_list = "aa" * 2000, and the result is the same.
But if I change 2000 to 20000, it does take some memory.

Why?

A similar question is here: What does “del” do exactly?

I am not sure how exactly things work, but I think this is what happens:

  • The memory-profiler does not measure the memory that is actually used by the interpreter, but that of the whole process, as stated in the FAQ of memory-profiler:

Q: How accurate are the results?
A: This module gets the memory consumption by querying the operating system kernel about the amount of memory the current process has allocated, which might be slightly different from the amount of memory that is actually used by the Python interpreter. Also, because of how the garbage collector works in Python the result might be different between platforms and even between runs.

  • If the interpreter reserved enough memory beforehand, so that the new list fits in the free reserved memory, then the process does not need to reserve more. So memory-profiler correctly prints out a change of 0. A list of a couple of thousand integers is not really huge after all, so I'd not be surprised if the interpreter usually has some free reserved memory dangling around for that (see the sketch below this list).
  • If the required memory for the new huge list does not fit in the already reserved memory, the process needs to reserve more, and that is what memory-profiler actually sees.
  • abarnert's answer to What does “del” do exactly? is basically saying the same thing.
  • Despite all that, the results on my machine are different (using Python 3.8.5, 64 bit (AMD64) on win32 and memory-profiler 0.57.0):
     Line #    Mem usage    Increment   Line Contents
     ================================================
     4  41.3984375000 MiB  41.3984375000 MiB   @profile(precision=10)
     5                             def f():
     6  41.4023437500 MiB   0.0039062500 MiB       huge_list = [x for x in range(1)]
     7  41.4023437500 MiB   0.0000000000 MiB       del huge_list
     8  41.4101562500 MiB   0.0078125000 MiB       print("finish")

So I guess it depends very much on the system...
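For a rough sense of the sizes involved, here is a small sketch (my own, not part of the original post) that uses only the standard library to estimate how much memory such a list needs. A list of 2000 small ints comes out to a few tens of KiB, well below the granularity at which a process typically grows, while the 20000-element version is large enough that the interpreter is more likely to have to ask the OS for additional memory:

#! /usr/bin/python3
import sys

def rough_list_size(lst):
    # Rough estimate: the list object itself plus its element objects.
    # CPython caches small ints, so this overestimates the *new* memory
    # that a list built from range() really requires.
    return sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)

if __name__ == "__main__":
    small = [x for x in range(2000)]
    big = [x for x in range(20000)]
    print("2000 ints : ~%.1f KiB" % (rough_list_size(small) / 1024))
    print("20000 ints: ~%.1f KiB" % (rough_list_size(big) / 1024))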

EDIT:

As Lescurel wrote in the question's comments:

You should consider using tracemalloc if you want precise memory tracing of your app.
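A minimal sketch of what that could look like (my own example, not from the original comment): tracemalloc traces allocations made by the interpreter itself, so the list shows up even when the process-level numbers reported by memory-profiler do not change:

#! /usr/bin/python3
import tracemalloc

def f():
    huge_list = [x for x in range(2000)]
    # get_traced_memory() returns (current, peak) bytes allocated by Python
    current, peak = tracemalloc.get_traced_memory()
    print("current: %.1f KiB, peak: %.1f KiB" % (current / 1024, peak / 1024))
    del huge_list
    print("finish")

if __name__ == "__main__":
    tracemalloc.start()
    f()
    tracemalloc.stop()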
