Confluent Kafka Avro producer schema error

I am using the example code from https://github.com/confluentinc/confluent-kafka-python/blob/master/examples/avro_producer.py to load data onto a topic. The only change I made was to add "default": null to each field, for schema compatibility, along these lines (the field name here is just an illustration):
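
    {"name": "favorite_number", "type": "int", "default": null}

The data loads fine: I can see the message and schema at http://localhost:9021/, and running the kafka-avro-console-consumer command from the CLI also shows the data arriving on the topic. But when I run the Redshift sink with the configuration from https://docs.confluent.io/current/connect/kafka-connect-aws-redshift/index.html, I get the error shown below. However, if I don't add "default": null to the fields, everything works end to end. Any guidance would be much appreciated.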

org.apache.kafka.connect.errors.SchemaBuilderException: Invalid default value
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:131)
    at io.confluent.connect.avro.AvroData.toConnectSchema(AvroData.java:1812)
    at io.confluent.connect.avro.AvroData.toConnectSchema(AvroData.java:1567)
    at io.confluent.connect.avro.AvroData.toConnectSchema(AvroData.java:1687)
    at io.confluent.connect.avro.AvroData.toConnectSchema(AvroData.java:1543)
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:1226)
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:108)
    at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:491)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:491)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:468)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:324)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:200)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.DataException: Invalid value: null used for required field: "null", schema type: INT32
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:220)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:213)
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:129)

It's not enough to add "default": null; you also need to amend the field's type to be a union, something like:

type: ["null", "string"], default: null

taking care to put "null" in the first position of the union, since Avro requires a field's default value to match the first type listed in the union, i.e. not:

type: ["string", "null"], default: null

See discussion at: http://apache-avro.679487.n3.nabble.com/How-to-declare-an-optional-field-tp4025089p4025094.html
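
For completeness, here is a minimal sketch of producing with such a schema, using the AvroProducer API that older versions of the linked avro_producer.py example are built on (the record name, field names, topic, and addresses are all placeholders):

    from confluent_kafka import avro
    from confluent_kafka.avro import AvroProducer

    # Illustrative schema: "null" comes first in the union, matching the null default.
    value_schema = avro.loads("""
    {
        "type": "record",
        "name": "User",
        "fields": [
            {"name": "name", "type": "string"},
            {"name": "favorite_color", "type": ["null", "string"], "default": null}
        ]
    }
    """)

    producer = AvroProducer(
        {
            "bootstrap.servers": "localhost:9092",           # placeholder broker
            "schema.registry.url": "http://localhost:8081",  # placeholder registry
        },
        default_value_schema=value_schema,
    )

    # With the union type, a null value for the optional field is now accepted.
    producer.produce(topic="users", value={"name": "alice", "favorite_color": None})
    producer.flush()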
