Python memory consumption of objects and process
I wrote the following code:
from hurry.filesize import size
from pysize import get_size
import os
import psutil

def load_objects():
    process = psutil.Process(os.getpid())
    print "start method"
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    objects = make_a_call()
    print "total size of objects is " + size(get_size(objects))
    print "process consumes " + size(process.memory_info().rss)
    print "exit method"

def main():
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    load_objects()
    print "process consumes " + size(process.memory_info().rss)
I use get_size() from pysize to report the memory consumption of the objects.
I get the following output:
process consumes 21M
start method
total size of objects is 20M
process consumes 29M
exit method
process consumes 29M
Objects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether; it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable.

CPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage, which collects most objects as soon as they become unreachable, but is not guaranteed to collect garbage containing circular references. See the documentation of the gc module for information on controlling the collection of cyclic garbage. Other implementations act differently and CPython may change. Do not depend on immediate finalization of objects when they become unreachable (so you should always close files explicitly).
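To make those lifetime rules concrete, here is a small sketch (Python 3 syntax; the `Node` class is made up purely for illustration) showing reference counting reclaiming acyclic objects immediately, while a reference cycle has to wait for the cyclic collector:

```python
import gc
import sys

class Node(object):
    """Toy object that can participate in a reference cycle (illustrative only)."""
    def __init__(self):
        self.ref = None

# With no cycles, reference counting reclaims an object as soon as the
# last reference disappears; no collector pass is needed.
n = Node()
print(sys.getrefcount(n))  # at least 2: the name 'n' plus getrefcount's argument

# A cycle keeps both objects alive even after their names are deleted...
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# ...until the cyclic garbage collector finds them.
print(gc.collect())  # reports at least the two unreachable Node objects
```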
Here is a fully working (Python 2.7) example that exhibits the same problem (I updated the original code slightly for simplicity):
from hurry.filesize import size
from pysize import get_size
import os
import psutil

def make_a_call():
    return range(1000000)

def load_objects():
    process = psutil.Process(os.getpid())
    print "start method"
    process = psutil.Process(os.getpid())
    print "process consumes ", size(process.memory_info().rss)
    objects = make_a_call()
    # FIXME
    print "total size of objects is ", size(get_size(objects))
    print "process consumes ", size(process.memory_info().rss)
    print "exit method"

def main():
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    load_objects()
    print "process consumes " + size(process.memory_info().rss)

main()
Here is the output:
process consumes 7M
start method
process consumes 7M
total size of objects is 30M
process consumes 124M
exit method
process consumes 124M
The difference is ~100 MB.
And here is the fixed version of the code:
from hurry.filesize import size
from pysize import get_size
import os
import psutil

def make_a_call():
    return range(1000000)

def load_objects():
    process = psutil.Process(os.getpid())
    print "start method"
    process = psutil.Process(os.getpid())
    print "process consumes ", size(process.memory_info().rss)
    objects = make_a_call()
    print "process consumes ", size(process.memory_info().rss)
    print "total size of objects is ", size(get_size(objects))
    print "exit method"

def main():
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    load_objects()
    print "process consumes " + size(process.memory_info().rss)

main()
Here is the updated output:
process consumes 7M
start method
process consumes 7M
process consumes 38M
total size of objects is 30M
exit method
process consumes 124M
Did you spot the difference? You were computing the size of the objects before measuring the final process size, and that computation itself causes the extra memory consumption. Let's look at why this happens; here is the source of https://github.com/bosswissam/pysize/blob/master/pysize.py:
import sys
import inspect

def get_size(obj, seen=None):
    """Recursively finds size of objects in bytes"""
    size = sys.getsizeof(obj)
    if seen is None:
        seen = set()
    obj_id = id(obj)
    if obj_id in seen:
        return 0
    # Important mark as seen *before* entering recursion to gracefully handle
    # self-referential objects
    seen.add(obj_id)
    if hasattr(obj, '__dict__'):
        for cls in obj.__class__.__mro__:
            if '__dict__' in cls.__dict__:
                d = cls.__dict__['__dict__']
                if inspect.isgetsetdescriptor(d) or inspect.ismemberdescriptor(d):
                    size += get_size(obj.__dict__, seen)
                break
    if isinstance(obj, dict):
        size += sum((get_size(v, seen) for v in obj.values()))
        size += sum((get_size(k, seen) for k in obj.keys()))
    elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
        size += sum((get_size(i, seen) for i in obj))
    return size
A lot is going on here! The most notable point is that it records every object it has seen in a set, in order to handle circular references. If you remove that line, neither version consumes that much memory.
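To get a feel for how expensive that bookkeeping is, here is a rough sketch (Python 3 syntax; exact numbers vary by interpreter build) estimating the memory held just by a `seen`-style set of ids for a million-element list:

```python
import sys

# pysize records id(obj) for every object it visits.  For a list of one
# million ints that means a set holding a million ids, and each id is
# itself a full Python int object (a memory address).
objects = list(range(1000000))
seen = {id(x) for x in objects}

# Size of the set's hash table plus the int objects stored in it.
overhead = sys.getsizeof(seen) + sum(sys.getsizeof(i) for i in seen)
print("seen-set overhead: about %d MB" % (overhead // (1024 * 1024)))
```

On a typical 64-bit CPython this bookkeeping alone runs to tens of megabytes, which is why calling get_size() before the final RSS measurement inflates it so much.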
Additionally, if you create a big object and delete it again, Python may well have freed the memory, but the memory allocators involved don't necessarily return that memory to the operating system, so the Python process can appear to use more virtual memory than it actually does.
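The gap between what Python has freed and what the process still appears to hold can be observed with the standard-library tracemalloc module (Python 3), which tracks the interpreter's own allocations rather than OS-level RSS. A rough sketch:

```python
import tracemalloc

tracemalloc.start()

big = list(range(1000000))
with_list, _peak = tracemalloc.get_traced_memory()

del big
after_del, _peak = tracemalloc.get_traced_memory()

# From the interpreter's point of view the memory is gone after del,
# even though the RSS figure psutil reports for the process may not
# shrink at all, because the allocator keeps the freed arenas around.
print("traced while alive: %d MB" % (with_list // (1024 * 1024)))
print("traced after del:   %d MB" % (after_del // (1024 * 1024)))

tracemalloc.stop()
```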