
Too many open files Kafka Exception on running for long

I have a Kafka producer written in Java that watches a directory for new files using the java.nio WatchService API and pushes each new file to a Kafka topic. A Spark Streaming consumer reads from that topic. After the producer job has been running for about a day, I get the following error. The producer pushes about 500 files every 2 minutes. My Kafka topic has 1 partition and a replication factor of 2. Can someone please help?

org.apache.kafka.common.KafkaException: Failed to construct kafka producer         
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:342) 
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:166) 
    at com.hp.hawkeye.HawkeyeKafkaProducer.Sender.createProducer(Sender.java:60) 
    at com.hp.hawkeye.HawkeyeKafkaProducer.Sender.<init>(Sender.java:38)   
    at com.hp.hawkeye.HawkeyeKafkaProducer.HawkeyeKafkaProducer.<init>(HawkeyeKafkaProducer.java:54) 
    at com.hp.hawkeye.HawkeyeKafkaProducer.myKafkaTestJob.main(myKafkaTestJob.java:81)

Caused by: org.apache.kafka.common.KafkaException: java.io.IOException: Too many open files
    at org.apache.kafka.common.network.Selector.<init>(Selector.java:125)
    at org.apache.kafka.common.network.Selector.<init>(Selector.java:147)  
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:306)
    ... 7 more
Caused by: java.io.IOException: Too many open files         
     at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)         
     at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:130)        
     at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:69)      
     at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36) 
     at java.nio.channels.Selector.open(Selector.java:227)         
     at org.apache.kafka.common.network.Selector.<init>(Selector.java:123)     
 ... 9 more

Check ulimit -aH to see your hard limits.

Check with your admin and increase the open-files limit, e.g.:

open files                      (-n) 655536
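The current limits can be inspected directly from a shell; a minimal sketch (the limits.conf entries below are illustrative and require admin rights to apply):

```shell
# Inspect the per-process open-file limits
ulimit -Sn    # soft limit (currently enforced)
ulimit -Hn    # hard limit (the ceiling the soft limit may be raised to)

# To raise the limit persistently on Linux, an admin would typically add
# entries like these to /etc/security/limits.conf (user and values illustrative):
#   kafkauser  soft  nofile  65536
#   kafkauser  hard  nofile  655536
# A re-login (or service restart) is needed for the new limits to take effect.
```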

Otherwise, I suspect there may be file-descriptor leaks in your code; refer to:

http://mail-archives.apache.org/mod_mbox/spark-user/201504.mbox/%3CCAKWX9VVJZObU9omOVCfPaJ_bPAJWiHcxeE7RyeqxUHPWvfj7WA@mail.gmail.com%3E
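One simple way to confirm a descriptor leak is to watch the process's open-descriptor count over time; a sketch assuming a Linux host (how you obtain the producer's pid, e.g. via jps or pgrep, is an assumption):

```shell
# PID of the producer JVM; this shell's own pid is used here purely for illustration.
PID=$$

# On Linux, /proc/<pid>/fd contains one entry per open file descriptor:
ls /proc/$PID/fd | wc -l
```

If this count climbs steadily when sampled every few minutes instead of plateauing, the code is opening sockets or files faster than it releases them — a common cause is constructing a new KafkaProducer per file without calling close() on it, rather than reusing a single producer instance.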
