Kafka data loss, in producer
I have been trying to configure one Kafka broker, one topic, one producer, and one consumer. When the producer publishes and the broker goes down, data loss happens, e.g.:
In Buffer:
Datum 1 - published
Datum 2 - published
.
. ---->(Broker goes down for a while and reconnects...)
.
Datum 4 - published
Datum 5 - published
Properties configured for the producer are:
bootstrap.servers=localhost:9092
acks=all
retries=1
batch.size=16384
linger.ms=2
buffer.memory=33554432
key.serializer=org.apache.kafka.common.serialization.IntegerSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
producer.type=sync
buffer.size=102400
reconnect.interval=30000
request.required.acks=1
The data size is smaller than the configured buffer size. Help me figure out where I am going wrong!
Not sure what exactly you do. I would assume that the messages you try to write to Kafka while the broker is down are not acked by Kafka. If a message is not acked, that indicates the message was not written to Kafka, and the producer needs to retry writing the message.
The easiest way to do this is by setting the configuration parameters retries and retry.backoff.ms accordingly.
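For example, the producer properties could be adjusted along these lines (the values below are illustrative, not recommendations; delivery.timeout.ms only exists in newer clients, Kafka 2.1+). Note that the original config sets retries=1, which survives at most one failed attempt, and mixes old-style settings (request.required.acks, producer.type) with the new producer API, where only acks and retries apply:

```
acks=all
# retry failed sends many times instead of once
retries=2147483647
# wait between retry attempts
retry.backoff.ms=500
# (newer clients) upper bound on total time for a send, including retries
delivery.timeout.ms=120000
```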
At the application level, you can also register a Callback in send(..., Callback) to get informed about success/failure. In case of failure, you could retry sending by calling send() again.
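A minimal sketch of that callback approach is below (the topic name and record contents are made up for illustration; it assumes kafka-clients on the classpath and a broker at localhost:9092). Be aware that naively resending from the callback can reorder messages unless max.in.flight.requests.per.connection is set to 1:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RetryingSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<Integer, String> producer = new KafkaProducer<>(props);
        ProducerRecord<Integer, String> record =
                new ProducerRecord<>("my-topic", 1, "Datum 1");

        // The callback runs when the broker acks the write (exception == null)
        // or the send ultimately fails (exception != null).
        producer.send(record, (metadata, exception) -> {
            if (exception != null) {
                // Application-level retry: resend the same record once more.
                producer.send(record);
            }
        });
        producer.close();
    }
}
```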