How can I debug (potentially C-library related) memory issues using 64-bit Python on Windows?

I have a python program that processes image frames with Python 2.7, PIL, OpenCV, and numpy/scipy. To the best of my knowledge, it does not maintain any lists of previous frames. Nevertheless, memory consumption increases steadily as the program processes more and more frames.

There are several good discussions of memory profiling solutions for Python, but they seem to focus on 32-bit or Linux solutions. What should I use with 64-bit Python 2.7 on Windows? Initial investigations suggest that the issue is with a C library. I am particularly interested in tools to help detect C library leaks or experience finding leaks in Python / OpenCV / PIL.
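One simple first step that works the same way on 64-bit Windows is to log the process working set next to a Python-level statistic every N frames: if the working set keeps climbing while the number of objects the Python garbage collector tracks stays flat, the growth is very likely happening inside a C extension rather than in Python objects. A minimal sketch, assuming a reasonably recent third-party psutil package is installed (process_frame is a placeholder for the real per-frame work):

    import gc
    import os

    import psutil  # third-party; assumed installed (pip install psutil)

    def log_memory(label):
        # Working set (RSS) of the whole process vs. objects the Python GC tracks.
        rss = psutil.Process(os.getpid()).memory_info().rss
        print("%s: working set %.1f MB, gc-tracked objects %d"
              % (label, rss / 1e6, len(gc.get_objects())))

    def process_frame(frame):
        # Placeholder for the real PIL / OpenCV / numpy per-frame processing.
        return frame

    for i in range(1000):
        process_frame(i)
        if i % 100 == 0:
            log_memory("frame %d" % i)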

I've found the tools discussed here very helpful: http://mg.pov.lt/blog/hunting-python-memleaks.html

There is a version of his code here with some additions for measuring numpy array sizes.
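In the same spirit, a rough way to see how much memory live numpy arrays account for is to walk outward from the objects the garbage collector tracks and sum nbytes for every ndarray encountered. This is only a diagnostic sketch of that idea, not the code linked above, and it can be slow on large programs; plain ndarrays may not be tracked by the cyclic collector directly, so the sketch also follows referents of tracked containers:

    import gc

    import numpy as np

    def live_ndarray_bytes():
        # Rough total of bytes held by numpy arrays reachable from GC-tracked
        # objects.  Slow: intended as an occasional diagnostic, not per frame.
        seen = set()
        stack = list(gc.get_objects())
        count = 0
        total = 0
        while stack:
            obj = stack.pop()
            if id(obj) in seen:
                continue
            seen.add(id(obj))
            if isinstance(obj, np.ndarray):
                count += 1
                total += obj.nbytes
            else:
                stack.extend(gc.get_referents(obj))
        return count, total

    count, total = live_ndarray_bytes()
    print("%d arrays holding %.1f MB" % (count, total / 1e6))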

I had a similar sort of problem tracking down a severe memory leak in numpy/scipy-heavy code, where none of the usual Python memory management tools and diagnostics detected the leak or hinted at its source.

In my case, the source of the leak was scipy interface code to the UMFPACK solver package, which called a C-language initialization routine on every call of the interface object constructor but never called the de-initialization routine when the interface object was destroyed, so scratch space and internal allocations leaked at a rate of about 15 MB per call. In an application making 10-20k calls, the impact was severe. Because the memory allocation was not done via the Python memory manager, tools like heapy could not detect the leak.
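One way to confirm a per-call native leak of this kind without any Python-level profiler is to construct and discard the suspect object in a loop, force a collection, and watch the process working set: if it grows by a roughly constant amount per iteration even though Python has released the object, the memory is being allocated (and lost) inside the C library. A rough sketch, again assuming psutil; make_solver is a hypothetical stand-in for whichever interface object you suspect:

    import gc
    import os

    import psutil  # third-party; assumed installed

    def make_solver():
        # Hypothetical stand-in for the suspect constructor (e.g. the
        # scipy/UMFPACK interface object described above).
        return object()

    proc = psutil.Process(os.getpid())
    previous = proc.memory_info().rss
    for i in range(20):
        solver = make_solver()
        del solver
        gc.collect()  # make sure Python itself has let go of the object
        current = proc.memory_info().rss
        print("call %2d: working set grew by %.2f MB"
              % (i, (current - previous) / 1e6))
        previous = current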

I wound up having to use valgrind plus "printf"-style debugging to track down the culprit. You might need to look at non-Python memory use analyzers and instrumentation tools to find out where the leak is coming from. I don't work in the Windows environment and am not familiar with the standard toolchains, so I can't really suggest what to use. Perhaps someone else could chip in with some suggestions.
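On Windows specifically, one option that needs no extra tooling is to do the same "printf"-style narrowing from inside the process by reading the working set through the Win32 API with ctypes, printing the delta around each suspect call. A sketch of that idea (Windows-only; measure_call is an illustrative helper name, not part of any library):

    import ctypes
    import ctypes.wintypes as wintypes

    class PROCESS_MEMORY_COUNTERS(ctypes.Structure):
        # Mirrors the Win32 PROCESS_MEMORY_COUNTERS structure (psapi.h).
        _fields_ = [
            ("cb", wintypes.DWORD),
            ("PageFaultCount", wintypes.DWORD),
            ("PeakWorkingSetSize", ctypes.c_size_t),
            ("WorkingSetSize", ctypes.c_size_t),
            ("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
            ("QuotaPagedPoolUsage", ctypes.c_size_t),
            ("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
            ("QuotaNonPagedPoolUsage", ctypes.c_size_t),
            ("PagefileUsage", ctypes.c_size_t),
            ("PeakPagefileUsage", ctypes.c_size_t),
        ]

    _kernel32 = ctypes.windll.kernel32
    _psapi = ctypes.windll.psapi
    _kernel32.GetCurrentProcess.restype = wintypes.HANDLE
    _psapi.GetProcessMemoryInfo.argtypes = [
        wintypes.HANDLE, ctypes.POINTER(PROCESS_MEMORY_COUNTERS), wintypes.DWORD]

    def working_set_bytes():
        # Current working set of this process, in bytes.
        counters = PROCESS_MEMORY_COUNTERS()
        counters.cb = ctypes.sizeof(counters)
        _psapi.GetProcessMemoryInfo(
            _kernel32.GetCurrentProcess(), ctypes.byref(counters), counters.cb)
        return counters.WorkingSetSize

    def measure_call(label, func, *args, **kwargs):
        # Print how much the working set grew across a single call.
        before = working_set_bytes()
        result = func(*args, **kwargs)
        print("%s: +%.2f MB" % (label, (working_set_bytes() - before) / 1e6))
        return result

    # Example usage: measure_call("process_frame", process_frame, frame)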

David Malcolm gave a talk this year at PyCon 2011 called Dude, Where's My RAM? He talks about debugging memory usage in Python and shows a tool he developed for analyzing memory usage called gdb-heap, which can track memory usage down to individual bytes. Really, really great talk. I would guess it'd be difficult to use gdb-heap on Windows (maybe it would be useful to test on another platform and possibly debug there?), but the talk covers a lot of the common issues, resolutions, etc.
