
Kafka Connect running out of heap space

After starting Kafka Connect ( connect-standalone ), my task fails immediately with:

java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:248)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:316)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:222)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Some Kafka documentation mentions heap space, telling you to try it with "the default" and only modify it if there are problems, but there are no instructions on how to modify the heap space.

When you have Kafka problems with

java.lang.OutOfMemoryError: Java heap space

it doesn't necessarily mean that it's a memory problem. Several Kafka admin tools, like kafka-topics.sh, will mask the true error with this message when trying to connect to an SSL port: the plaintext client interprets the first bytes of the TLS response as a message length and tries to allocate a buffer of that size, which triggers the OutOfMemoryError. The true (masked) error is SSL handshake failed!

See this issue: https://issues.apache.org/jira/browse/KAFKA-4090

The solution is to include a properties file in your command (for kafka-topics.sh the flag is --command-config ) that contains this line:

security.protocol=SSL
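For example, a minimal sketch (the host, port, and file name are placeholders; depending on your cluster, truststore settings may also be required in the same file):

```shell
# Hypothetical client config file; broker.example.com:9093 is a placeholder
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
EOF

kafka-topics.sh --bootstrap-server broker.example.com:9093 \
  --command-config client-ssl.properties --list
```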

You can control the maximum and initial heap size by setting the KAFKA_HEAP_OPTS environment variable.

The following example sets a starting size of 512 MB and a maximum size of 1 GB:

KAFKA_HEAP_OPTS="-Xms512m -Xmx1g" connect-standalone connect-worker.properties connect-s3-sink.properties

When running a Kafka command such as connect-standalone , the kafka-run-class script is invoked, which sets a default heap size of 256 MB in the KAFKA_HEAP_OPTS environment variable if it is not already set.
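The default-heap logic in kafka-run-class.sh is roughly the following (the exact flags vary by Kafka version), which is why exporting KAFKA_HEAP_OPTS yourself takes precedence:

```shell
# Sketch of the fallback in kafka-run-class.sh: only apply the
# default when the caller has not already set KAFKA_HEAP_OPTS.
if [ -z "$KAFKA_HEAP_OPTS" ]; then
  KAFKA_HEAP_OPTS="-Xmx256M"
fi
echo "$KAFKA_HEAP_OPTS"
```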

I was also facing this issue and could not start my producer and consumer for a given topic. I also deleted all unnecessary log files and topics, even though that's not related to the issue.

Changing kafka-run-class.sh did not work for me. I changed the files below:

kafka-console-consumer.sh

kafka-console-producer.sh

and stopped getting the OOM error. Both consumer and producer worked fine after this.

I increased the size to KAFKA_HEAP_OPTS="-Xmx1G" ; it was 512m earlier.

I found another cause of this issue this morning. I was seeing this same exception, except that I'm not using SSL and my messages are very small. The issue in my case turned out to be a misconfigured bootstrap-servers URL. If you configure that URL to be a server and port that is open but incorrect, you can cause this same exception. The Kafka folks are aware of the general issue and are tracking it here: https://cwiki.apache.org/confluence/display/KAFKA/KIP-498%3A+Add+client-side+configuration+for+maximum+response+size+to+protect+against+OOM

In my case, using a Spring Boot 2.7.8 application leveraging Spring Boot Kafka auto-configuration (no configuration in Java code), the problem was caused by the security protocol not being set (apparently the default value is PLAINTEXT ). Other errors I got together with java.lang.OutOfMemoryError: Java heap space were:

Stopping container due to an Error
Error while stopping the container: 
Uncaught exception in thread 'kafka-producer-network-thread | producer-':

The solution was to add the following lines to my application.properties :

spring.kafka.consumer.security.protocol=SSL
spring.kafka.producer.security.protocol=SSL

My attempt to fix it with just:

spring.kafka.security.protocol=SSL 

did not work.
