
Mongodb Kafka messages not seen by topic

I found that my topic, despite being up and running, doesn't register events occurring in my MongoDB.

Every time I insert or modify a record, I no longer get any output from the kafka-console-consumer command.

Is there a way to clear Kafka's cache/offsets, maybe? The source and sink connections are up and running, and the entire cluster is also healthy. The thing is, everything worked as usual, but every couple of weeks I see this coming back, or when I log into my Mongo cloud from another location.

The --partition 0 parameter didn't help, and neither did changing retention.ms to 1.
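For reference, this is roughly how the topic was being watched; a minimal sketch, assuming a broker at localhost:9092 and a topic named stackoverflow.people (both placeholders, not taken from the post):

    # Consume all messages currently in the topic, starting from the earliest offset
    kafka-console-consumer --bootstrap-server localhost:9092 \
        --topic stackoverflow.people \
        --from-beginning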

I checked both connectors' status and got RUNNING:

curl localhost:8083/connectors | jq

curl localhost:8083/connectors/monit_people/status | jq
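For context, a rough sketch of what a healthy status response from the second call looks like; the worker address and task count below are illustrative, not taken from the screenshots:

    $ curl -s localhost:8083/connectors/monit_people/status | jq
    {
      "name": "monit_people",
      "connector": { "state": "RUNNING", "worker_id": "connect:8083" },
      "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "connect:8083" } ],
      "type": "source"
    }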

Running docker-compose logs connect, I found:

    WARN Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286

    If the resume token is no longer available then there is the potential for data loss.
    Saved resume tokens are managed by Kafka and stored with the offset data.

    When running Connect in standalone mode offsets are configured using the:
    `offset.storage.file.filename` configuration.
    When running Connect in distributed mode the offsets are stored in a topic.

    Use the `kafka-consumer-groups.sh` tool with the `--reset-offsets` flag to reset offsets.

    Resetting the offset will allow for the connector to be resume from the latest resume token.
    Using `copy.existing=true` ensures that all data will be outputted by the connector but it will duplicate existing data.
    Future releases will support a configurable `errors.tolerance` level for the source connector and make use of the `postBatchResumeToken
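The warning points at kafka-consumer-groups.sh with --reset-offsets. A minimal sketch of that invocation is below; the group and topic names are placeholders, and note that for a source connector running in distributed mode the resume token is kept in the Connect offsets topic (as the message above says), so a consumer-group reset alone may not be enough:

    # Sketch only: reset a consumer group's offsets, as the warning suggests.
    # "connect-monit_people" and "stackoverflow.people" are placeholder names.
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
        --group connect-monit_people \
        --topic stackoverflow.people \
        --reset-offsets --to-earliest \
        --execute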

The issue requires more practice with Confluent Platform, so for now I rebuilt the entire environment by removing all containers with:

docker system prune -a -f --volumes

docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")

After running docker-compose up -d, everything is up and working again.
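After a rebuild the connectors have to be registered again. A rough sketch of re-creating the MongoDB source connector through the Connect REST API is below; the connection.uri, database, and collection values are placeholder assumptions, and copy.existing=true is the option the warning above mentions for re-emitting documents that already exist in the collection:

    # Sketch: re-register the source connector after the rebuild.
    # The connection.uri, database and collection values are placeholders.
    curl -X POST -H "Content-Type: application/json" localhost:8083/connectors -d '{
      "name": "monit_people",
      "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb+srv://user:password@cluster0.example.mongodb.net",
        "database": "stackoverflow",
        "collection": "people",
        "copy.existing": "true"
      }
    }'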
