Using the latest Kafka and Confluent JDBC sink connectors. Sending a really simple JSON message:
{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "int32",
        "optional": false,
        "field": "id"
      },
      {
        "type": "string",
        "optional": true,
        "field": "msg"
      }
    ],
    "optional": false,
    "name": "msgschema"
  },
  "payload": {
    "id": 222,
    "msg": "hi"
  }
}
But I am getting this error:
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
Jsonlint says the JSON is valid. I have set schemas.enable=true
in the Kafka Connect configuration. Any pointers?
You need to tell Connect that your schema is embedded in the JSON you're using.
You have:
value.converter=org.apache.kafka.connect.json.JsonConverter
but you also need:
value.converter.schemas.enable=true
In order to use the JDBC sink, your streamed messages must have a schema. This can be achieved either by using Avro with the Schema Registry, or by using JSON with embedded schemas. If schemas.enable=true was configured after the source connector first ran, you may need to delete the topic, re-run the sink, and then start the source side again.
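As a sanity check, the envelope that JsonConverter expects with schemas.enable=true — exactly two top-level fields, "schema" and "payload" — can be sketched in Python. This mirrors the message from the question (note that Connect schema types are sized, so the id field is int32, not int); it only uses the standard library and is purely illustrative:

```python
import json

# JsonConverter (schemas.enable=true) requires an envelope with exactly
# two top-level fields: "schema" and "payload". Any other top-level
# field triggers the DataException shown in the question.
envelope = {
    "schema": {
        "type": "struct",
        "name": "msgschema",
        "optional": False,
        "fields": [
            # Connect types are sized: int32/int64, not "int".
            {"type": "int32", "optional": False, "field": "id"},
            {"type": "string", "optional": True, "field": "msg"},
        ],
    },
    "payload": {"id": 222, "msg": "hi"},
}

# Serialize to the single-line JSON you would actually produce to the topic.
message = json.dumps(envelope)
print(message)
```

A message shaped like this, produced to the sink's topic, is what the converter can deserialize into a Connect struct for the JDBC sink.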
Example sink.properties file:
name=sink-mysql
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=test-mysql-jdbc-foobar
connection.url=jdbc:mysql://127.0.0.1:3306/demo?user=user1&password=user1pass
auto.create=true
and an example worker configuration file, connect-avro-standalone.properties:
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=share/java
and execute:
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/sink.properties
I recently came across the same issue, and it took multiple retries before I figured out what was missing.
The following settings worked for me:
key.converter.schemas.enable=false
value.converter.schemas.enable=true
Also, make sure the table already exists in the database, and that the connector does not attempt to create one: set auto.create=false
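With auto.create=false you have to create the table yourself, matching the Connect schema. Here is a rough Python sketch that derives a CREATE TABLE statement from the schema's field list; the table name and the type mapping below are assumptions for illustration, not the JDBC connector's actual dialect logic:

```python
# Assumed mapping from Connect schema types to MySQL column types,
# for illustration only -- the real connector's dialect may differ.
TYPE_MAP = {
    "int32": "INT",
    "int64": "BIGINT",
    "string": "VARCHAR(256)",
    "boolean": "TINYINT",
}

def ddl_for(table, fields):
    # Non-optional Connect fields become NOT NULL columns.
    cols = ", ".join(
        f"{f['field']} {TYPE_MAP[f['type']]}{'' if f['optional'] else ' NOT NULL'}"
        for f in fields
    )
    return f"CREATE TABLE {table} ({cols})"

# Fields mirroring the schema from the question (with int32 for id).
print(ddl_for("test_msgs", [
    {"type": "int32", "optional": False, "field": "id"},
    {"type": "string", "optional": True, "field": "msg"},
]))
```

Running this prints a statement you could adapt and execute against MySQL before starting the sink.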