
Java IOException "Too many open files"

I'm doing some file I/O with multiple files (writing to 19 files, as it happens). After writing to them a few hundred times I get the Java IOException: Too many open files. But I actually have only a few files open at once. What is the problem here? I can verify that the writes were successful.

On Linux and other UNIX / UNIX-like platforms, the OS places a limit on the number of open file descriptors that a process may have at any given time. In the old days, this limit used to be hardwired 1 , and relatively small. These days it is much larger (hundreds / thousands), and subject to a "soft" per-process configurable resource limit. (Look up the ulimit shell builtin ...)

Your Java application must be exceeding the per-process file descriptor limit.

You say that you have 19 files open, and that after a few hundred times you get an IOException saying "too many files open". Now this particular exception can ONLY happen when a new file descriptor is requested; ie when you are opening a file (or a pipe or a socket). You can verify this by printing the stacktrace for the IOException.

Unless your application is being run with a small resource limit (which seems unlikely), it follows that it must be repeatedly opening files / sockets / pipes, and failing to close them. Find out why that is happening and you should be able to figure out what to do about it.
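One way to check whether descriptors really are accumulating is to count the entries under /proc/self/fd. This is a Linux-only sketch (that path does not exist on Windows or macOS), and the file name is illustrative:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class FdCount {
    // On Linux, each entry under /proc/self/fd is one open descriptor
    // belonging to this process.
    static int openFds() {
        String[] entries = new File("/proc/self/fd").list();
        return entries == null ? -1 : entries.length;  // -1 on non-Linux platforms
    }

    public static void main(String[] args) throws IOException {
        int before = openFds();
        Writer w = new FileWriter("fdcount-demo.txt");  // takes one descriptor
        int during = openFds();
        w.close();                                      // gives it back
        System.out.println("before=" + before + ", during=" + during);
    }
}
```

Calling openFds() periodically from a leaking loop will show the count climbing toward the ulimit instead of staying flat.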

FYI, the following pattern is a safe way to write to files that is guaranteed not to leak file descriptors.

Writer w = new FileWriter(...);
try {
    // write stuff to the file
} finally {
    try {
        w.close();
    } catch (IOException ex) {
        // Log error writing file and bail out.
    }
}

1 - Hardwired, as in compiled into the kernel. Changing the number of available fd slots required a recompilation ... and could result in less memory being available for other things. In the days when Unix commonly ran on 16-bit machines, these things really mattered.

UPDATE

The Java 7 way is more concise:

try (Writer w = new FileWriter(...)) {
    // write stuff to the file
} // the `w` resource is automatically closed 
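The same construct scales to several files at once, which fits the 19-writer scenario in the question. A minimal sketch (file names are illustrative): all resources in the header are closed automatically, in reverse declaration order, even if a write to any of them throws.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class MultiWriterDemo {
    public static void main(String[] args) throws IOException {
        // Both writers are closed automatically when the block exits,
        // whether normally or via an exception.
        try (Writer log = new FileWriter("demo-log.txt");
             Writer data = new FileWriter("demo-data.txt")) {
            log.write("started\n");
            data.write("42\n");
        }
    }
}
```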

UPDATE 2

Apparently you can also encounter a "too many files open" while attempting to run an external program. The basic cause is as described above. However, the reason that you encounter this in exec(...) is that the JVM is attempting to create "pipe" file descriptors that will be connected to the external application's standard input / output / error.
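So a process-spawning loop leaks descriptors the same way a file-opening loop does: each spawn ties up pipe descriptors for stdin / stdout / stderr until they are drained and released. A hedged sketch of the hygiene that avoids this (it assumes a Unix-like system where `echo` is on the PATH):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ExecDemo {
    // Spawn a process, drain its stdout, and release its pipe descriptors.
    static String runEcho() throws IOException, InterruptedException {
        Process p = new ProcessBuilder("echo", "hello").start();
        String line;
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            line = out.readLine();  // drain stdout so the pipe is not left dangling
        }
        p.waitFor();                // reap the child process
        p.destroy();                // frees any remaining pipe descriptors
        return line;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runEcho());
    }
}
```

Skipping the drain-and-close step inside a loop is enough to hit the descriptor limit after a few hundred iterations.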

For UNIX:

As Stephen C has suggested, changing the maximum file descriptor value to a higher value avoids this problem.

Try looking at your present file descriptor capacity:

   $ ulimit -n

Then change the limit according to your requirements.

   $ ulimit -n <value>

Note that this just changes the limits in the current shell and any child / descendant process. To make the change "stick" you need to put it into the relevant shell script or initialization file.

You're obviously not closing your file descriptors before opening new ones. Are you on Windows or Linux?

Although in most general cases the error is quite clearly that file handles have not been closed, I just encountered an instance with JDK7 on Linux that well... is sufficiently ****ed up to explain here.

The program opened a FileOutputStream (fos), a BufferedOutputStream (bos) and a DataOutputStream (dos). After writing to the DataOutputStream, the dos was closed and I thought everything went fine.

Internally however, the dos tried to flush the bos, which returned a Disk Full error. That exception was eaten by the DataOutputStream, and as a consequence the underlying bos was not closed, hence the fos was still open.

At a later stage that file was then renamed from (something with a .tmp) to its real name. Thereby, the Java file descriptor trackers lost track of the original .tmp, yet it was still open!

To solve this, I had to first flush the DataOutputStream myself, retrieve the IOException and close the FileOutputStream myself.
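A defensive version of that close sequence might look like the following sketch (file name illustrative): flush the outermost stream explicitly so a disk-full error surfaces where you can see it, and close the underlying FileOutputStream in a finally block so a swallowed flush error can never leave the descriptor open.

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SafeClose {
    public static void main(String[] args) throws IOException {
        FileOutputStream fos = new FileOutputStream("safeclose-demo.bin");
        try {
            DataOutputStream dos =
                new DataOutputStream(new BufferedOutputStream(fos));
            dos.writeInt(42);
            dos.flush();   // surface any disk-full error here, not inside close()
        } finally {
            fos.close();   // the descriptor is released no matter what happened above
        }
    }
}
```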

I hope this helps someone.

If you're seeing this in automated tests: it's best to properly close all files between test runs.

If you're not sure which file(s) you have left open, a good place to start is the "open" calls which are throwing exceptions! 😄

If you have a file handle that should stay open exactly as long as its parent object is alive, you could add a finalize method on the parent that calls close on the file handle.

Recently, I had a program batch-processing files. I was certainly closing each file in the loop, but the error was still there.

Later, I resolved this problem by garbage collecting eagerly every hundred files:

int index = 0;
while (hasMoreFiles()) {        // hasMoreFiles() / openNextFile() are placeholders
    OutputStream out = openNextFile();
    try {
        // write to out ...
    } finally {
        out.close();
    }
    if (index++ % 100 == 0)     // was "= 0" (assignment), must be "==" (comparison)
        System.gc();            // encourage collection of any lingering streams
}
