
kafka failing to start - too many open files

I am trying to start the Kafka service, but it fails with the error below.


● kafka.service - Apache Kafka server (broker)
   Loaded: loaded (/etc/systemd/system/kafka.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2021-11-17 06:16:12 UTC; 46min ago
     Docs: http://kafka.apache.org/documentation.html
  Process: 10979 ExecStop=/opt/deployments/commoninfra/kafka/bin/kafka-server-stop.sh (code=exited, status=1/FAILURE)
  Process: 10409 ExecStart=/opt/deployments/commoninfra/kafka/bin/kafka-server-start.sh /opt/deployments/commoninfra/kafka/config/server.properties
 Main PID: 10409 (code=exited, status=1/FAILURE)

Nov 17 06:16:11 atl-kafka2 kafka-server-start.sh[10409]:         at kafka.network.Acceptor.accept(SocketServer.scala:642)
Nov 17 06:16:11 atl-kafka2 kafka-server-start.sh[10409]:         at kafka.network.Acceptor.run(SocketServer.scala:571)
Nov 17 06:16:11 atl-kafka2 kafka-server-start.sh[10409]:         at java.lang.Thread.run(Thread.java:748)
Nov 17 06:16:11 atl-kafka2 kafka-server-start.sh[10409]: [2021-11-17 06:16:11,519] ERROR Error while accepting connection (kafka.network.Acceptor)
Nov 17 06:16:11 atl-kafka2 kafka-server-start.sh[10409]: java.io.IOException: Too many open files
Nov 17 06:16:12 atl-kafka2 systemd[1]: kafka.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 06:16:12 atl-kafka2 kafka-server-stop.sh[10979]: No kafka server to stop
Nov 17 06:16:12 atl-kafka2 systemd[1]: kafka.service: Control process exited, code=exited status=1
Nov 17 06:16:12 atl-kafka2 systemd[1]: kafka.service: Unit entered failed state.
Nov 17 06:16:12 atl-kafka2 systemd[1]: kafka.service: Failed with result 'exit-code'.

How can I check the following: which microservice is opening many connections to Kafka, and the total number of established and waiting connections?
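To see which clients hold the most connections, one approach is to group the broker's established sockets by peer address with `ss`. A sketch, assuming the default listener port 9092 (adjust for your listener config; the awk split assumes IPv4 peers):

```shell
# Established connections to the broker's listener, grouped by remote IP, so
# the chattiest microservice host surfaces at the top.
ss -tn state established '( sport = :9092 )' \
  | awk 'NR > 1 { split($4, peer, ":"); counts[peer[1]]++ }
         END { for (ip in counts) print counts[ip], ip }' \
  | sort -rn | head

# Total connections per TCP state (ESTAB, TIME-WAIT, CLOSE-WAIT, ...):
ss -tan | awk 'NR > 1 { states[$1]++ } END { for (s in states) print states[s], s }'
```

A persistently growing ESTAB or CLOSE-WAIT count for one peer usually points at the microservice that is leaking connections.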

We tried a quick fix by increasing the ulimit, but we are hitting this issue every day. We need a permanent solution.
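One common reason a ulimit increase doesn't stick: limits set in a shell profile or /etc/security/limits.conf do not apply to services started by systemd. Since the log shows the broker runs from /etc/systemd/system/kafka.service, a more durable place for the limit is the unit itself, e.g. a drop-in created with `sudo systemctl edit kafka.service` (a sketch; 100000 is an example value, size it to your workload):

```ini
# /etc/systemd/system/kafka.service.d/override.conf
[Service]
LimitNOFILE=100000
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart kafka.service`, and confirm the running process actually got the limit with `grep 'open files' /proc/<kafka-pid>/limits`. Note this only buys headroom: if connections or segment files keep growing, any limit will eventually be hit again.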

To debug the issue, try issuing lsof on the Kafka broker process to get the list of files the broker has opened. Topic partitions are folders.
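A minimal sketch of that lsof check (assuming a standard Kafka install where the JVM main class is kafka.Kafka; on a host with no broker running, pgrep finds nothing and the block is a no-op):

```shell
# Snapshot what the broker process has open.
PID=$(pgrep -f kafka.Kafka | head -n 1)
if [ -n "$PID" ]; then
  lsof -p "$PID" | wc -l                 # total open descriptors
  # Group by descriptor TYPE (column 5: REG for segment/index files,
  # IPv4/IPv6 for sockets) to see whether files or connections dominate.
  lsof -p "$PID" | awk 'NR > 1 { print $5 }' | sort | uniq -c | sort -rn
fi
```

If REG entries dominate, look at segment sizing below; if IPv4/IPv6 sockets dominate, chase the client connections instead.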

Check your segment.ms or segment.bytes settings to see whether Kafka is configured to roll over new segments frequently, which ends up creating a lot of files.
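For reference, the broker-wide defaults that govern rollover live in server.properties (segment.ms and segment.bytes are the per-topic overrides of these); values far below the defaults multiply the number of .log and .index files each partition keeps open. A sketch of where to look:

```properties
# server.properties -- broker-wide segment rollover defaults
log.segment.bytes=1073741824   # 1 GiB (the default); tiny values => many files
log.roll.hours=168             # 7 days (the default)
```

Per-topic overrides can be inspected with `bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name <topic> --describe` (substitute your bootstrap address and topic name).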

You may also want to consider adding new brokers if needed.

Ensure that you are using Kafka 2.1.1+. Check out this ticket: https://issues.apache.org/jira/browse/KAFKA-7697
