Is there any way to use MongoSourceConnector for multiple databases with a single Kafka topic?
I am using MongoSourceConnector to connect a Kafka topic to a MongoDB collection. For a single database with a single Kafka topic it works fine, but is there any way I could connect multiple MongoDB databases to a single Kafka topic?
If you are running Kafka Connect in distributed mode, you can create another connector config file with the configuration mentioned above.
I am not really sure about multiple databases feeding a single Kafka topic, but you can certainly listen to multiple databases' change streams and push data to topics. Since topic creation depends on database_name.collection_name, you will end up with more topics.

You can provide a regex in the pipeline to listen to multiple databases:
"pipeline": "[{\"$match\":{\"$and\":[{\"ns.db\":{\"$regex\":/^database-names_.*/}},{\"ns.coll\":{\"$regex\":/^collection_name$/}}]}}]"
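To see what that $match stage lets through, here is a rough illustration using Python's re module. The actual matching is done server-side by MongoDB against the change event's ns.db and ns.coll fields; the example database names below are hypothetical.

```python
import re

# The two anchored patterns from the pipeline above.
db_pattern = re.compile(r"^database-names_.*")
coll_pattern = re.compile(r"^collection_name$")

def passes_pipeline(ns_db: str, ns_coll: str) -> bool:
    """Return True if a change event with this namespace would pass the $match stage."""
    return bool(db_pattern.match(ns_db)) and bool(coll_pattern.match(ns_coll))

print(passes_pipeline("database-names_eu", "collection_name"))  # True: both regexes match
print(passes_pipeline("other_db", "collection_name"))           # False: db regex fails
```

Any database whose name starts with the common prefix is picked up, so adding a new database that follows the naming convention requires no connector change.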
Here is the complete Kafka connector configuration.

Mongo to Kafka source connector:
{
  "name": "mongo-to-kafka-connect",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "publish.full.document.only": "true",
    "tasks.max": "3",
    "key.converter.schemas.enable": "false",
    "topic.creation.enable": "true",
    "poll.await.time.ms": 1000,
    "poll.max.batch.size": 100,
    "topic.prefix": "any prefix for topic name",
    "output.json.formatter": "com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson",
    "connection.uri": "mongodb://<username>:<password>@ip:27017,ip:27017,ip:27017,ip:27017/?authSource=admin&replicaSet=xyz&tls=true",
    "value.converter.schemas.enable": "false",
    "copy.existing": "true",
    "topic.creation.default.replication.factor": 3,
    "topic.creation.default.partitions": 3,
    "topic.creation.compacted.cleanup.policy": "compact",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "mongo.errors.log.enable": "true",
    "heartbeat.interval.ms": 10000,
    "pipeline": "[{\"$match\":{\"$and\":[{\"ns.db\":{\"$regex\":/^database-names_.*/}},{\"ns.coll\":{\"$regex\":/^collection_name$/}}]}}]"
  }
}
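One pitfall with the "pipeline" value is that it is a JSON string inside JSON, so the escaping is easy to get wrong. The /…/ regex literal form above relies on MongoDB's lenient JSON parsing; a plain string pattern is also valid for $regex and has the advantage of being strict JSON. As a sketch (the database/collection patterns are placeholders), you can build the string with json.dumps so the escaping is handled for you:

```python
import json

# $match stage equivalent to the pipeline above, using string $regex
# patterns instead of /.../ literals so the result is strict JSON.
match_stage = {
    "$match": {
        "$and": [
            {"ns.db": {"$regex": "^database-names_.*"}},
            {"ns.coll": {"$regex": "^collection_name$"}},
        ]
    }
}

# The connector expects "pipeline" as a JSON-encoded string of a stage list.
pipeline = json.dumps([match_stage])

# Round-trip check: the string parses back into the same stage list.
assert json.loads(pipeline) == [match_stage]
print(pipeline)
```

The resulting string can be dropped into the "pipeline" field of the config above as-is.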
You can get more details from the official docs.