
Kafka server node goes down with "Too many open files" error

We are using a 3-node Kafka cluster, with a total of 151 topics and 1 partition for each topic, and we have configured a replication factor of 3. When we start the Kafka brokers, we get the following error:

ERROR Error while accepting connection (kafka.network.Acceptor)

java.io.IOException: Too many open files

The default value of max open files is 1024 on most Unix systems. Depending on your throughput you need to configure a much higher value. Try to start with 32768 or higher.
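To see why 1024 runs out so fast: with a replication factor of 3 on a 3-node cluster, every broker hosts all 151 partitions, and each partition keeps its active log segment and index files open, on top of client and replication sockets. To make a higher limit survive restarts, set it where the broker process actually starts. A minimal sketch, assuming Kafka runs as user kafka and, where applicable, as a systemd unit named kafka.service (both names are assumptions; adjust them to your installation):

# For brokers started from a login shell: raise the PAM limits
# by appending to /etc/security/limits.conf
echo 'kafka soft nofile 100000' | sudo tee -a /etc/security/limits.conf
echo 'kafka hard nofile 100000' | sudo tee -a /etc/security/limits.conf

# For a systemd-managed broker, PAM limits are ignored, so set
# LimitNOFILE in a drop-in override and restart the unit instead
sudo mkdir -p /etc/systemd/system/kafka.service.d
printf '[Service]\nLimitNOFILE=100000\n' | sudo tee /etc/systemd/system/kafka.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart kafka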

Looks like this is due to a low limit on the number of file handles.

Can you check the file descriptor limit as below:

ulimit -n
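Since the shell where you run ulimit -n may not share the broker's limits, it can also help to inspect the running broker process directly. A small sketch, assuming the broker was started via the standard kafka.Kafka main class (used here only to locate the PID):

# Find the broker PID (assumes the standard kafka.Kafka main class)
KAFKA_PID=$(pgrep -f kafka.Kafka | head -n 1)
# Show the open-file limit the running process actually has
grep 'Max open files' /proc/"$KAFKA_PID"/limits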

Try changing the open file descriptor limit to a higher value (note that this only affects the current shell session and processes started from it):

ulimit -n <noOfFiles>

You can get the system-wide maximum allowed number of open files with:

cat /proc/sys/fs/file-max
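To judge how much headroom a new limit leaves, you can also count the descriptors the broker currently holds; a sketch, again assuming the PID is located via the standard kafka.Kafka main class:

KAFKA_PID=$(pgrep -f kafka.Kafka | head -n 1)
# Each entry in /proc/<pid>/fd is one open descriptor (files and sockets)
ls /proc/"$KAFKA_PID"/fd | wc -l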
