
Kafka Connect: No tasks created for a connector

We are running Kafka Connect (Confluent Platform 5.4, i.e. Kafka 2.4) in distributed mode using the Debezium (MongoDB) and Confluent S3 connectors. When adding a new connector via the REST API, the connector is created in the RUNNING state, but no tasks are created for it.

Pausing and resuming the connector does not help. When we stop all workers and then start them again, the tasks are created and everything runs as it should.

The issue is not caused by the connector plugins, because we see the same behaviour for both the Debezium and S3 connectors. Also, in the debug logs I can see that Debezium is correctly returning a task configuration from the Connector.taskConfigs() method.

Can somebody tell me what to do so we can add connectors without restarting the workers? Thanks.
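(For reference, pausing, resuming and restarting a connector can all be done through the standard Connect REST API. A minimal sketch, using the worker host and REST port from the configuration below and a placeholder connector name:)

curl -X PUT  http://tdp-QA-kafka-connect-001:10083/connectors/<connector-name>/pause
curl -X PUT  http://tdp-QA-kafka-connect-001:10083/connectors/<connector-name>/resume
curl -X POST http://tdp-QA-kafka-connect-001:10083/connectors/<connector-name>/restart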

Configuration details

The cluster has 3 nodes with the following connect-distributed.properties:

bootstrap.servers=kafka-broker-001:9092,kafka-broker-002:9092,kafka-broker-003:9092,kafka-broker-004:9092
group.id=tdp-QA-connect-cluster

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.topic=connect-offsets-qa
offset.storage.replication.factor=3
offset.storage.partitions=5

config.storage.topic=connect-configs-qa
config.storage.replication.factor=3

status.storage.topic=connect-status-qa
status.storage.replication.factor=3
status.storage.partitions=3

offset.flush.interval.ms=10000

rest.host.name=tdp-QA-kafka-connect-001
rest.port=10083
rest.advertised.host.name=tdp-QA-kafka-connect-001
rest.advertised.port=10083

plugin.path=/opt/kafka-connect/plugins,/usr/share/java/

security.protocol=SSL
ssl.truststore.location=/etc/kafka/ssl/kafka-connect.truststore.jks
ssl.truststore.password=<secret>
ssl.endpoint.identification.algorithm=
producer.security.protocol=SSL
producer.ssl.truststore.location=/etc/kafka/ssl/kafka-connect.truststore.jks
producer.ssl.truststore.password=<secret>
consumer.security.protocol=SSL
consumer.ssl.truststore.location=/etc/kafka/ssl/kafka-connect.truststore.jks
consumer.ssl.truststore.password=<secret>

max.request.size=20000000
max.partition.fetch.bytes=20000000
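Since the cluster runs in distributed mode, write requests such as connector creation are forwarded between workers using rest.advertised.host.name and rest.advertised.port, so it is worth confirming that each worker's REST interface is reachable from the others. A minimal check (the 002/003 hostnames are an assumption based on the naming pattern of the first worker):

curl http://tdp-QA-kafka-connect-001:10083/connectors
curl http://tdp-QA-kafka-connect-002:10083/connectors
curl http://tdp-QA-kafka-connect-003:10083/connectors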

The connector configurations

Debezium example:

{
  "name": "qa-mongodb-comp-converter-task|1",
  "config": {
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
    "mongodb.hosts": "mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
    "mongodb.name": "qa-debezium-comp",
    "mongodb.ssl.enabled": true,
    "collection.whitelist": "converter[.]task",
    "tombstones.on.delete": true
  }
}

S3 example:

{
  "name": "qa-s3-sink-task|1",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "qa-debezium-comp.converter.task",
    "topics.dir": "data/env/qa",
    "s3.region": "eu-west-1",
    "s3.bucket.name": "<bucket-name>",
    "flush.size": "15000",
    "rotate.interval.ms": "3600000",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "schema.compatibility": "NONE",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": false,
    "value.converter.schemas.enable": false,
    "transforms": "ExtractDocument",
    "transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
  }
}

The connectors are created using curl: curl -X POST -H "Content-Type: application/json" --data @<json_file> http://<connect_host>:10083/connectors
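After the POST, whether tasks were actually created can be checked through the same REST interface; a minimal sketch using the same placeholders:

curl http://<connect_host>:10083/connectors/<connector-name>/status
curl http://<connect_host>:10083/connectors/<connector-name>/tasks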

I ran into the same problem, so I changed the connector's name and created a new connector, and it worked. I don't know the root cause of this issue, though, because there was no relevant information in the kafka-connect logs.

Delete the connector and create it again. Repeat this process until the task(s) show up.

It worked for me after 6-7 attempts; I'm not sure why. Pausing and resuming, or restarting the connector/tasks, did not help me.
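A minimal sketch of the delete-and-recreate cycle described above, using the same placeholders as the question:

curl -X DELETE http://<connect_host>:10083/connectors/<connector-name>
curl -X POST -H "Content-Type: application/json" --data @<json_file> http://<connect_host>:10083/connectors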

I got empty tasks when deploying a different connector; see Tasks are empty after deploying ElasticsearchSinkConnector.

Adding these two properties to the connector config when deploying it will help locate why the task failed.

        "errors.log.include.messages": "true",
        "errors.log.enable": "true"

In my case, instead of empty tasks, it will show why it failed:

GET /connectors/elasticsearch-sink/status

{
    "name": "elasticsearch-sink",
    "connector": {
        "state": "RUNNING",
        "worker_id": "10.xxx.xxx.xxx:8083"
    },
    "tasks": [
        {
            "id": 0,
            "state": "FAILED",
            "worker_id": "10.xxx.xxx.xxx:8083",
            "trace": "org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to access group: connect-elasticsearch-sink\n"
        }
    ],
    "type": "sink"
}
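In that case the task failure is an authorization problem on the consumer group. If the brokers use Kafka's built-in ACL authorizer, a fix along these lines could apply (the principal User:connect and the client-ssl.properties file are assumptions; adjust them to your setup):

kafka-acls --bootstrap-server kafka-broker-001:9092 \
  --command-config client-ssl.properties \
  --add --allow-principal User:connect \
  --operation Read \
  --group connect-elasticsearch-sink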
