
Kafka: Consumer Crashing

I inherited some Kafka code that I'm integrating into another project and came across an issue... After the consumer receives 3995 messages from the producer, it crashes with the following error:

ERROR Error while accepting connection (kafka.network.Acceptor) 
java.io.IOException: Too many open files

Information about the data being sent:
- Very bursty around the time of the crash
- Always crashes at message 3995

I am running it on a CentOS virtual machine, and I've run other, smaller data sets through it with ease. Thanks for your time!

"Too many open files" can you type 'lsof | 您可以输入“ lsof |“打开的文件太多”吗? wc -l' in your linux to know how many files are opened. linux中的wc -l'可以知道打开了多少文件。

Follow this guide to increase the number of open files allowed:

The Maximum Number Of Files Was Reached, How Do I Fix This Problem? Many applications, such as the Oracle database or the Apache web server, need this limit to be considerably higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):

sysctl -w fs.file-max=100000
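Note that this sysctl call only raises the system-wide ceiling, and only until the next reboot; the per-process limit (ulimit -n) is usually what a single Kafka process hits first. A sketch of making both changes persistent, assuming the broker runs as a user named kafka (adjust the user name and values to your setup):

echo "fs.file-max = 100000" >> /etc/sysctl.conf    # persist the system-wide ceiling
sysctl -p                                           # reload it now

# In /etc/security/limits.conf, raise the per-process limit for the kafka
# user (takes effect on the next login/session):
kafka  soft  nofile  100000
kafka  hard  nofile  100000

If descriptors keep climbing no matter how high the limit goes, the real fix is in the inherited code: make sure every consumer, producer, and file handle is closed when it is no longer needed, rather than opening a new one per message.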
