MongoDB Kafka messages not seen by topic
My topic, although up and running, does not register events happening in my MongoDB. Whenever I insert or modify a record, I no longer get any output from the kafka-console-consumer command.

Is there a way to clear Kafka's cache/offsets? The source and sink connectors are up and running, and the whole cluster is healthy too. The thing is, everything works as usual, but every few weeks I see this problem come back, or it appears when I log into my Mongo cloud from another location.
The --partition 0 parameter did not help, and neither did changing retention.ms to 1.
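One quick check before touching offsets is to re-read the topic from the beginning, which distinguishes "no events are being produced" from "the consumer is starting past the existing data". This is a sketch; the topic name below is a placeholder for whatever topic your source connector actually publishes to:

```shell
# Placeholder topic name -- substitute the topic your source connector writes to.
TOPIC="monit_people.people"

# Read from offset 0 rather than only new messages; --timeout-ms makes the
# consumer exit instead of blocking forever when no more messages arrive.
kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic "$TOPIC" \
  --from-beginning \
  --timeout-ms 10000
```

If existing documents show up here but new inserts never do, the problem is on the source-connector side rather than the consumer.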
I checked the status of both connectors and both report RUNNING:

curl localhost:8083/connectors | jq
curl localhost:8083/connectors/monit_people/status | jq
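Since the REST status can report RUNNING while a task is actually stuck, it can be worth restarting the connector through the Kafka Connect REST API before rebuilding anything. A sketch, using the `monit_people` connector name from above:

```shell
# Restart the connector instance (on older Connect versions this does
# not restart its tasks, so restart the task explicitly as well).
curl -X POST localhost:8083/connectors/monit_people/restart

# Restart task 0 -- the task is what actually reads the change stream.
curl -X POST localhost:8083/connectors/monit_people/tasks/0/restart

# Re-check the status afterwards.
curl localhost:8083/connectors/monit_people/status | jq
```

A task that immediately returns to RUNNING but still produces nothing points at the resume-token problem described in the log below.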
Running docker-compose logs connect, I found the following warning:

WARN Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog.
If the resume token is no longer available then there is the potential for data loss.
Saved resume tokens are managed by Kafka and stored with the offset data.
When running Connect in standalone mode offsets are configured using the:
`offset.storage.file.filename` configuration.
When running Connect in distributed mode the offsets are stored in a topic.
Use the `kafka-consumer-groups.sh` tool with the `--reset-offsets` flag to reset offsets.
Resetting the offset will allow for the connector to be resume from the latest resume token.
Using `copy.existing=true` ensures that all data will be outputted by the connector but it will duplicate existing data.
Future releases will support a configurable `errors.tolerance` level for the source connector and make use of the `postBatchResumeToken`.
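Following the tool the warning itself recommends, below is a hedged sketch of an offset reset. The group and topic names are placeholders; note also that the Apache distribution ships the script as `kafka-consumer-groups.sh`, while the Confluent images expose it as `kafka-consumer-groups`. For a *source* connector in distributed mode the resume token is stored in Connect's offsets topic rather than a consumer group, so this reset mainly applies to the sink/consumer side:

```shell
# List the existing consumer groups first to find the right one --
# the group name below is a placeholder.
kafka-consumer-groups --bootstrap-server localhost:9092 --list

# Preview the reset with --dry-run, then replace it with --execute to apply.
kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --group connect-monit_people \
  --topic monit_people.people \
  --reset-offsets --to-latest \
  --dry-run
```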
The issue needs more practice with Confluent Platform on my side, so for now I rebuilt the whole environment by stopping and deleting all the containers:

docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")
docker system prune -a -f --volumes

After running docker-compose up -d, everything works fine again.
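A lighter-weight alternative to pruning every container is to delete and re-create just the connector. This is a sketch, assuming the connector's configuration is available in a local connector.json file (a hypothetical filename); as the warning above notes, adding copy.existing=true makes the connector re-emit existing data, at the cost of duplicating it:

```shell
# Delete the connector. Note that its offsets (including the stale resume
# token) remain in Connect's offsets topic under the connector name, so
# re-creating it under a *new* name, or with copy.existing=true, avoids
# resuming from the dead token.
curl -X DELETE localhost:8083/connectors/monit_people

# Re-create it from a local config file (connector.json is assumed to exist).
curl -X POST -H "Content-Type: application/json" \
  --data @connector.json \
  localhost:8083/connectors
```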