
Kafka server node goes down with “Too many open files” error

We are using a 3-node Kafka cluster with a total of 151 topics, one partition per topic, and a replication factor of 3. When we start the Kafka brokers, we get the following error:

ERROR Error while accepting connection (kafka.network.Acceptor)

java.io.IOException: Too many open files

The default limit on open files is 1024 on most Unix systems. Each partition replica keeps file handles open for its log segments and index files, and every client and replication socket also counts as an open file, so with 151 topics at replication factor 3 a broker quickly exceeds that default. Depending on your throughput you need to configure a much higher value; try starting with 32768 or higher.
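To make the higher limit survive reboots and new sessions, you can set it in the PAM limits configuration or, if the broker is managed by systemd, in the service unit. A minimal sketch, assuming the broker runs as a user named kafka under a systemd unit named kafka.service (both names are assumptions; adjust to your deployment):

# /etc/security/limits.conf (or a file under /etc/security/limits.d/):
# raise the soft and hard open-file limits for the broker user
kafka soft nofile 100000
kafka hard nofile 100000

# systemd services ignore limits.conf, so if Kafka runs as a unit,
# set the limit in a drop-in instead:
sudo systemctl edit kafka.service
# add the following lines in the editor that opens:
#   [Service]
#   LimitNOFILE=100000
sudo systemctl daemon-reload
sudo systemctl restart kafka.service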

Looks like this is caused by too low a file descriptor limit.

Check the current limit:

ulimit -n

Try raising it to a higher value (note that ulimit -n only applies to the current shell session and the processes started from it):

ulimit -n <noOfFiles>
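For example, you could raise the limit in the shell that launches the broker, so the broker process inherits it (the limit value and the /opt/kafka install path below are assumptions):

# raise the limit for this session, then start the broker from the same shell
ulimit -n 100000
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties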

You can get the system-wide maximum allowed number of open files with:

cat /proc/sys/fs/file-max
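To confirm the broker is actually approaching its limit, you can count the descriptors its process holds. A quick check, assuming the broker was started via Kafka's standard kafka.Kafka main class (run as root or as the broker's user):

# find the broker's PID and count its open file descriptors
KAFKA_PID=$(pgrep -f kafka.Kafka)
ls /proc/$KAFKA_PID/fd | wc -l

# compare against the limit the running process was actually given
grep 'open files' /proc/$KAFKA_PID/limits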
