
__consumer_offsets topic with very big partitions

I am using Kafka 2.0.0. Some partitions of the __consumer_offsets topic are 500-700 GB, with more than 5000-7000 segments each, and those segments are older than 2-3 months. There are no errors in the logs, and the topic uses the default COMPACT cleanup policy.

What could be the problem? Maybe a config or a consumer problem? Or maybe a bug in Kafka 2.0.0? What checks could I do?

My settings:

log.cleaner.enable=true
log.cleanup.policy = [delete]
log.retention.bytes = -1
log.segment.bytes = 268435456
log.retention.hours = 72
log.retention.check.interval.ms = 300000



offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
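One quick check, given the settings above: the broker forces the internal offsets topic to `cleanup.policy=compact` regardless of the broker-wide `log.cleanup.policy`, so it may help to confirm what is actually configured on the topic itself. A minimal sketch, assuming Kafka 2.0 tooling (which still uses the `--zookeeper` form) and ZooKeeper at `localhost:2181` (address and working directory are assumptions; adjust to your cluster):

```shell
# Show per-topic configuration overrides for the internal offsets topic.
# If compaction is in effect you should see cleanup.policy=compact here
# (or no override, in which case the internal default applies).
./kafka-configs.sh --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name __consumer_offsets
```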

Try restarting the cluster. It will resolve the issue, but rebalancing takes a lot of time because of the size of the topic.

There can be a crash of the log cleaner threads (log.cleaner.threads) in your brokers. Restarting the brokers restarts those threads, and cleaning will start again.

log.cleaner.threads defaults to 1 in Kafka. Increase it, so that if one thread crashes there is still another one running.

If this is the case, there should be messages about it in the server logs.
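As a quick way to look for such messages, you could search the broker's dedicated log-cleaner log file for errors. A sketch, assuming a default-style installation (the log directory path is an assumption; adjust it to where your Kafka logs actually live):

```shell
# Look for recent errors or exceptions from the log cleaner threads
# (path is hypothetical; check your broker's log4j configuration)
grep -iE "error|exception" /var/log/kafka/log-cleaner.log | tail -n 20
```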

Could it be that you have an application looping and creating a different consumer group every time?

You could use this command to look inside your __consumer_offsets topic and try to find consumer group names that repeat; maybe some users are creating many consumer groups in loops, or running console consumers...

echo "exclude.internal.topics=false" > /tmp/consumer.config
./kafka-console-consumer.sh --consumer.config /tmp/consumer.config \
--formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" \
--bootstrap-server localhost:9092 --topic __consumer_offsets --from-beginning
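Once you have the formatter output, one way to spot suspicious groups is to count how many records each group name accounts for. A minimal sketch over hypothetical sample lines (the `[group,topic,partition]::...` record shape below is abbreviated; real OffsetsMessageFormatter output carries more fields, but the group name comes first inside the brackets):

```shell
# Count offset records per consumer group from formatter-style output.
# The three printf lines simulate __consumer_offsets records; in practice
# you would pipe the kafka-console-consumer output here instead.
printf '%s\n' \
  '[group-a,topic1,0]::OffsetAndMetadata(offset=5)' \
  '[group-b,topic1,0]::OffsetAndMetadata(offset=9)' \
  '[group-a,topic1,1]::OffsetAndMetadata(offset=7)' |
  sed 's/^\[\([^,]*\),.*/\1/' |   # keep only the group name
  sort | uniq -c | sort -rn       # count and rank groups
```

A long tail of groups that each appear only once or twice can indicate an application generating a fresh group id on every run.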
