
Too many open files in python

I wrote a test suite that is heavily file-intensive. After some time (about 2 hours) I get an IOError: [Errno 24] Too many open files: '/tmp/tmpxsqYPm'. I double-checked that I close every file handle again, but the error persists.

I tried to figure out the number of allowed file descriptors using resource.RLIMIT_NOFILE and the number of currently open file descriptors:

import fcntl
import resource

def get_open_fds():
    fds = []
    for fd in range(3, resource.RLIMIT_NOFILE):
        try:
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        except IOError:
            continue
        fds.append(fd)
    return fds

So if I run the following test:

print get_open_fds()
for i in range(0, 100):
    f = open("/tmp/test_%i" % i, "w")
    f.write("test")
    print f.name
    print get_open_fds()

I get this output:

[]
/tmp/test_0
[3]
/tmp/test_1
[4]
/tmp/test_2
[3]
/tmp/test_3
[4]
/tmp/test_4
[3]
/tmp/test_5
[4] ...

That's strange; I expected an increasing number of open file descriptors. Is my script correct?

I'm using python's logger and subprocess. Could that be the reason for my fd leak?

Thanks, Daniel

The corrected code is:

import resource
import fcntl
import os

def get_open_fds():
    # Probe every descriptor up to the soft limit; F_GETFD raises
    # IOError for descriptors that are not open.
    fds = []
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    for fd in range(0, soft):
        try:
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        except IOError:
            continue
        fds.append(fd)
    return fds

def get_file_names_from_file_number(fds):
    # Resolve each descriptor to the path it points at
    # (Linux-specific, via /proc/self/fd).
    names = []
    for fd in fds:
        names.append(os.readlink('/proc/self/fd/%d' % fd))
    return names

fds = get_open_fds()
print get_file_names_from_file_number(fds)
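One way to put these helpers to use in a long-running test suite is to snapshot the descriptor table around each test and report anything that leaks, instead of waiting hours for EMFILE. This is only a sketch; run_one_test() is a hypothetical placeholder for whatever executes a single test case:

before = set(get_open_fds())
run_one_test()  # hypothetical entry point for one test case
leaked = set(get_open_fds()) - before
if leaked:
    print "leaked fds:", get_file_names_from_file_number(sorted(leaked))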

Your test script overwrites f on each iteration, which means the previous file object is garbage-collected and its file gets closed each time. Both logging to files and subprocess with pipes use up descriptors, which can lead to exhaustion.
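To illustrate both points, here is a minimal sketch: a with block closes each file deterministically instead of relying on the rebound f being garbage-collected, and draining a Popen's pipes with communicate() releases their descriptors (the echo command here is just a stand-in):

import subprocess

# Files: the with block guarantees close() at the end of each
# iteration, so the descriptor count stays flat.
for i in range(0, 100):
    with open("/tmp/test_%i" % i, "w") as f:
        f.write("test")

# Pipes: each Popen with stdout=PIPE allocates pipe descriptors;
# keeping the objects around without draining them accumulates fds.
procs = [subprocess.Popen(["echo", "test"], stdout=subprocess.PIPE)
         for _ in range(10)]
for p in procs:
    p.communicate()  # reads the output and closes the pipe ends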

resource.RLIMIT_NOFILE is indeed 7, but that's an index into resource.getrlimit(), not the limit itself... resource.getrlimit(resource.RLIMIT_NOFILE) is what you want as the upper bound of your range().
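To see the distinction, a quick check (the printed values are typical examples; actual limits vary by system):

import resource

# RLIMIT_NOFILE is a symbolic constant naming which limit to query
# (7 on Linux), not the limit itself.
print resource.RLIMIT_NOFILE                      # e.g. 7

# The actual per-process limit is the (soft, hard) pair:
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print soft, hard                                  # e.g. 1024 4096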
