Java heap space - Out of memory error - Kafka Broker with SASL_SSL

When I use the "/usr/bin/kafka-delete-records" command below against the Kafka broker's PLAINTEXT port 9092, it works fine, but when I use the SASL_SSL port 9094, it throws the error shown below. Does anyone know how to make this command work against port 9094 with SASL_SSL?

$ ssh **** ****@<IP address> /usr/bin/kafka-delete-records --bootstrap-server localhost:9094 --offset-json-file /kafka/records.json

[2019-10-14 04:15:49,891] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)

java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:390)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:351)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1125)
    at java.lang.Thread.run(Thread.java:748)
Executing records delete operation
Records delete operation completed:
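For reference, the `/kafka/records.json` file passed via `--offset-json-file` follows the standard `kafka-delete-records` input format; the topic name, partition, and offset below are placeholders:

```json
{
  "partitions": [
    { "topic": "my-topic", "partition": 0, "offset": 100 }
  ],
  "version": 1
}
```

Records before the given offset in each listed partition are deleted.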

NOTE: -Xmx is set to 8 GB, and the total memory of the server is 16 GB.

Please check the current heap settings below:

$ ps -ef | grep kafka
cp-kafka 11419     1  3 10:07 ?        00:05:27 java -Xms8g -Xmx8g  -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35  ........ io.confluent.support.metrics.SupportedKafka /etc/kafka/server.properties

Most likely the OOM exception is just a red herring; see JIRA KAFKA-4493. The real issue is the SASL_SSL connection, which your client is unable to establish properly. Enable SSL debug on the client side and proceed from there:

$ export KAFKA_OPTS="-Djavax.net.debug=handshake"
$ /usr/bin/kafka-delete-records ...
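As a rough illustration of why a security-protocol mismatch surfaces as an allocation failure (this is the mechanism described in KAFKA-4493, not necessarily your exact bytes): when the client expects a plaintext Kafka response but receives a TLS record instead, the record header is parsed as a 4-byte big-endian frame length, which comes out absurdly large:

```shell
# First bytes of a TLS 1.2 handshake record (0x16 0x03 0x03 ...),
# read as a big-endian unsigned 32-bit frame length:
printf '\x16\x03\x03\x00' | od -An -tu4 --endian=big
# → 369296128  (~352 MB requested as a single buffer)
```

The client then tries to allocate a buffer of that bogus size, and the real handshake failure is masked by `OutOfMemoryError: Java heap space`.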
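If the handshake debug shows the client never attempting TLS/SASL at all, the usual cause is that the admin client was given no SASL_SSL settings for port 9094. `kafka-delete-records` accepts them via `--command-config`; a minimal sketch, assuming the PLAIN mechanism and placeholder credentials and paths you would replace with your cluster's own:

```shell
# Hypothetical client config for the SASL_SSL listener; adjust the
# mechanism, credentials, and truststore to match your broker setup.
cat > /tmp/client-sasl.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<user>" \
  password="<password>";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
EOF

/usr/bin/kafka-delete-records --bootstrap-server localhost:9094 \
  --command-config /tmp/client-sasl.properties \
  --offset-json-file /kafka/records.json
```

With matching client settings, the handshake should complete and the delete operation run the same way it does on port 9092.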
