
kafka Too many open files

Have you ever had a similar problem with Kafka? I get this error: Too many open files. I don't know why. Here are some logs:

[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180821_1_LOCATION-87/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180822_PARSE-136/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,269] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180813_1_STATISTICS-402/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

In Kafka, every topic is (optionally) split into many partitions. For each partition, the broker maintains several files (for the index and the actual data).
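
For example, listing one of the partition directories from the stack traces above would typically show one .log (data), one .index (offset index) and one .timeindex (time index) file per log segment, plus the leader-epoch-checkpoint file that appears in the error:

ls /home/weihu/kafka/kafka/logs/BC_20180821_1_LOCATION-87/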

kafka-topics --zookeeper localhost:2181 --describe --topic topic_name

will give you the number of partitions for topic topic_name. The default number of partitions per topic, num.partitions, is defined in /etc/kafka/server.properties.

The total number of open files can be very large if the broker hosts many partitions and a given partition has many log segment files.
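
A rough way to gauge this is to count all files under the broker's log directory (the path below is taken from the stack traces above; adjust it for your setup):

find /home/weihu/kafka/kafka/logs -type f | wc -l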

You can see the current file descriptor limit by running:

ulimit -n

You can also check the number of open files using lsof:

lsof | wc -l
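
Note that lsof without arguments counts open files for every process on the machine. To count only the descriptors held by the broker itself, something like the following should work, assuming the broker was started with the standard kafka.Kafka main class:

lsof -p $(pgrep -f kafka.Kafka) | wc -l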

To solve the issue, you need to either raise the limit on open file descriptors:

ulimit -n <noOfFiles>

or somehow reduce the number of open files (for example, by reducing the number of partitions per topic).
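
Keep in mind that ulimit -n only changes the limit for the current shell session. For a persistent limit on setups that do not use Systemd, the usual place is /etc/security/limits.conf; a minimal sketch, assuming the broker runs as the kafka user and 100000 is a suitable value:

kafka  soft  nofile  100000
kafka  hard  nofile  100000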

On Linux distributions that use Systemd, such as RHEL and CentOS, you will need to add the limit settings to the [Service] block of the Systemd service file, as in the unit file below. Changing /etc/security/limits.conf alone is not sufficient.

[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
LimitAS=infinity
LimitRSS=infinity
LimitCORE=infinity
LimitNOFILE=65536
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
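
After editing the unit file, reload Systemd and restart the affected service, then verify that the new limit applies to the running process. The unit name kafka and the kafka.Kafka pattern below are assumptions; substitute the unit you actually changed and its main process:

sudo systemctl daemon-reload
sudo systemctl restart kafka
cat /proc/$(pgrep -f kafka.Kafka)/limits | grep "Max open files"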
