
Debezium MongoDB Connector Error: org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler

I am trying to deploy a new Debezium connector for MongoDB with transforms. The configuration looks like this:

{"name": "mongo_source_connector_autostate",
    "config": {
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector", 
    "tasks.max":1,
    "initial.sync.max.threads":4,
    "mongodb.hosts": "rs0/FE0VMC1980:27017", 
    "mongodb.name": "mongo", 
    "collection.whitelist": "DASMongoDB.*_AutoState",
    "transforms": "unwrap",
    "transforms.unwrap.type" : "io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope",
    "transforms.sanitize.field.names" : true
    }}

However, the connector fails with the following error:

 org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:290)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:316)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:240)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.util.concurrent.FutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.avro.SchemaParseException: Illegal initial character: 10019_AutoState
        at org.apache.avro.Schema.validateName(Schema.java:1528)
        at org.apache.avro.Schema.access$400(Schema.java:87)
        at org.apache.avro.Schema$Name.<init>(Schema.java:675)
        at org.apache.avro.Schema.createRecord(Schema.java:212)
        at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:893)
        at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:732)
        at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:726)
        at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:365)
        at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:80)
        at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:62)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:290)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
        ... 11 more

I have started the connector in distributed mode with the following configuration:

...
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
...

Note: I have another connector without any transforms. It runs just fine.

I would like to get some help regarding this. Thanks in advance.

One of your fields seems to be violating the Avro naming rules. In your case it seems to be this one:

The name portion of a fullname, record field names, and enum symbols must:

  • start with [A-Za-z_]
  • subsequently contain only [A-Za-z0-9_]

But 10019_AutoState violates this rule, as it starts with a digit. You can change it to something like AutoState10019.
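To see the constraint in isolation, here is a minimal sketch that reproduces the exception from the stack trace above. It assumes Avro is on the classpath (the five-argument Schema.createRecord overload used here exists in Avro 1.9+); the namespace mongo.DASMongoDB is made up for illustration:

import java.util.Collections;
import org.apache.avro.Schema;
import org.apache.avro.SchemaParseException;

public class AvroNameCheck {
    public static void main(String[] args) {
        try {
            // A record name starting with a digit fails Schema.validateName,
            // exactly as in the connector stack trace above.
            Schema.createRecord("10019_AutoState", null, "mongo.DASMongoDB", false,
                    Collections.emptyList());
        } catch (SchemaParseException e) {
            System.out.println(e.getMessage()); // Illegal initial character: 10019_AutoState
        }

        // The suggested rename is a legal Avro name, so schema creation succeeds.
        Schema ok = Schema.createRecord("AutoState10019", null, "mongo.DASMongoDB", false,
                Collections.emptyList());
        System.out.println(ok.getFullName()); // mongo.DASMongoDB.AutoState10019
    }
}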


You can view the full list of record field naming constraints here.

What Debezium version are you using? If it is a problem with 1.1/1.2, then please raise a Jira issue. The schema name needs to be sanitized, and it seems to me that in this case the error comes from the collection name 10019_AutoState: the schema name is not sanitized, even though it must not start with a number.
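For illustration, the sketch below shows the kind of adjustment such sanitization has to make. This is a hypothetical helper, not Debezium's actual implementation (Debezium ships its own utility for this purpose, SchemaNameAdjuster):

// Hypothetical helper for illustration; not Debezium's actual code.
public class AvroNameSanitizer {

    // Maps an arbitrary string onto a legal Avro name:
    // first character [A-Za-z_], remaining characters [A-Za-z0-9_].
    static String sanitize(String name) {
        StringBuilder sb = new StringBuilder(name.length() + 1);
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            boolean letterOrUnderscore =
                    (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || c == '_';
            boolean digit = c >= '0' && c <= '9';
            if (i == 0 && digit) {
                sb.append('_').append(c); // keep a leading digit by prefixing '_'
            } else if (letterOrUnderscore || digit) {
                sb.append(c);
            } else {
                sb.append('_'); // replace any other illegal character with '_'
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(sanitize("10019_AutoState")); // _10019_AutoState
    }
}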
