How to solve network and memory issues in Kafka brokers?

While using Kafka, I intermittently get two network-related errors.

1. Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest: Connection to broker was disconnected before the response was read

2. Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest: Connection to broker1 (id: 1 rack: null) failed

[Configuration environment]

- Brokers: 5
- server.properties: "kafka_manager_heap_s=1g", "kafka_manager_heap_x=1g", "offsets.commit.required.acks=1", "offsets.commit.timeout.ms=5000"; most other settings are the defaults
- Zookeepers: 3
- Servers: 5
- Kafka: 0.10.1.2
- Zookeeper: 3.4.6

Both of these errors are caused by loss of network communication.

When these errors occur, Kafka repeatedly expands and shrinks the ISR for the affected partitions, for example:

Expanding example: INFO Partition [my-topic,7] on broker 1: Expanding ISR for partition [my-topic,7] from 1,2 to 1,2,3
Shrinking example: INFO Partition [my-topic,7] on broker 1: Shrinking ISR for partition [my-topic,7] from 1,2,3 to 1,2
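
How quickly the ISR shrinks and expands around a transient network hiccup is governed by a few broker timeouts. As a rough illustration, here is a server.properties sketch using the stock 0.10.x defaults (these values are the defaults, not a recommendation; verify them against your own cluster):

# How long a follower may stop fetching before it is dropped from the ISR
replica.lag.time.max.ms=10000
# Socket timeout for the replica fetcher's requests to the leader
replica.socket.timeout.ms=30000
# ZooKeeper session timeout; an expired session makes a broker look dead to the controller
zookeeper.session.timeout.ms=6000

If a network blip lasts longer than replica.lag.time.max.ms, followers fall out of the ISR and re-enter once they catch up, which matches the expand/shrink log lines above.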

I understand that these errors come from network problems, but I'm not sure why the network connection is breaking in the first place.

And if the network disconnection persists, I get the following additional error: Error when handling request {topics=null}: java.lang.OutOfMemoryError: Java heap space
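
To confirm that the broker's heap is actually filling up before the OutOfMemoryError fires, you can watch GC activity on the broker JVM with the standard JDK tools. This is only a diagnostic sketch, where <broker-pid> is a placeholder for the broker's process id:

# List running JVMs and find the broker (its main class is kafka.Kafka)
jcmd
# Print heap occupancy and GC statistics every 5000 ms
jstat -gcutil <broker-pid> 5000

A steadily climbing old-generation column with frequent full GCs points to an undersized heap rather than a one-off allocation spike.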

What causes these errors, and how can I improve the situation?

The network error tells you that one of the brokers is down, so the fetcher cannot connect to it. In my experience, the minimum heap size you should assign to a broker is 2 GB.
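
If you decide to raise the heap, note that the broker heap is set through the KAFKA_HEAP_OPTS environment variable picked up by the startup scripts, not through server.properties. A minimal sketch using the 2 GB figure mentioned above (adjust to your hardware):

# Give the broker a fixed 2 GB heap (equal -Xms/-Xmx avoids resizing pauses)
export KAFKA_HEAP_OPTS="-Xms2g -Xmx2g"
bin/kafka-server-start.sh -daemon config/server.properties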
