Docker Confluent Kafka HDFS Sink Running but Task Failed
I am using the Confluent Kafka all-in-one Docker image to set up Kafka on a DigitalOcean droplet. I am able to run Kafka successfully and add the HDFS connector using the Kafka Connect REST API, replacing HOST_IP with my Cloudera CDH droplet's IP:
curl -X POST \
-H "Content-Type: application/json" \
--data '{
"name": "hdfs-sink",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "1",
"topics": "test_hdfs",
"hdfs.url": "hdfs://HOST_IP:8020",
"flush.size": "3",
"name": "hdfs-sink"
}}' \
http://HOST_IP:8083/connectors
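For reference, the connector status below comes from the standard Connect REST status endpoint:
curl http://HOST_IP:8083/connectors/hdfs-sink/status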
When I curl that status endpoint, I receive the following error in the JSON response under tasks (the connector state is RUNNING, but the task has FAILED):
java.lang.RuntimeException: io.confluent.kafka.serializers.subject.TopicNameStrategy is not an instance of io.confluent.kafka.serializers.subject.SubjectNameStrategy
UPDATE
So I managed to overcome this error by using 5.0.0 rather than the beta (silly me), as recommended by cricket007.
However, I'm receiving a different error when I actually attempt to publish data to my HDFS instance. I am using ksql-datagen to generate fake data:
docker-compose exec ksql-datagen ksql-datagen quickstart=users format=json topic=test_hdfs maxInterval=1000 \
    propertiesFile=/etc/ksql/datagen.properties bootstrap-server=broker:9092
{
"name": "hdfs-sink",
"connector": {
"state": "RUNNING",
"worker_id": "connect:8083"
},
"tasks": [{
"state": "FAILED",
"trace": "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:510)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.apache.kafka.connect.errors.DataException: test_hdfs\n\tat io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:510)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t... 13 more\nCaused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1\nCaused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!\n",
"id": 0,
"worker_id": "connect:8083"
}],
"type": "sink"
}
EDIT 2
Stack trace for the Avro ksql-datagen failing:
Outputting 1000000 to test_hdfs
Exception in thread "main" org.apache.kafka.common.errors.SerializationException: Error serializing row to topic test_hdfs using Converter API
Caused by: org.apache.kafka.connect.errors.DataException: test_hdfs
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:77)
at io.confluent.ksql.serde.connect.KsqlConnectSerializer.serialize(KsqlConnectSerializer.java:44)
at io.confluent.ksql.serde.connect.KsqlConnectSerializer.serialize(KsqlConnectSerializer.java:27)
at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:65)
at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:55)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:854)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:816)
at io.confluent.ksql.datagen.DataGenProducer.populateTopic(DataGenProducer.java:94)
at io.confluent.ksql.datagen.DataGen.main(DataGen.java:100)
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:172)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:229)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:320)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:312)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:307)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:114)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:153)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79)
at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:116)
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:75)
at io.confluent.ksql.serde.connect.KsqlConnectSerializer.serialize(KsqlConnectSerializer.java:44)
at io.confluent.ksql.serde.connect.KsqlConnectSerializer.serialize(KsqlConnectSerializer.java:27)
at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:65)
at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:55)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:854)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:816)
at io.confluent.ksql.datagen.DataGenProducer.populateTopic(DataGenProducer.java:94)
at io.confluent.ksql.datagen.DataGen.main(DataGen.java:100)
EDIT 3
OK, so for some reason, even though I am generating Avro data with ksql-datagen, I am still receiving a JSON serialization error on Kafka Connect.
docker-compose exec ksql-datagen ksql-datagen schema=/impressions.avro format=avro schemaRegistryUrl=http://schema-registry:8081 key=impressionid topic=test_hdfs maxInterval=1000 \
    propertiesFile=/etc/ksql/datagen.properties bootstrap-server=broker:9092
curl -X POST \
-H "Content-Type: application/json" \
--data '{
"name": "hdfs-sink",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"format.class": "io.confluent.connect.hdfs.avro.AvroFormat",
"tasks.max": "1",
"schema.compatibility": "FULL",
"topics": "test_hdfs",
"hdfs.url": "hdfs://cdh.nuvo.app:8020",
"flush.size": "3",
"name": "hdfs-sink"
}}' \
http://kafka.nuvo.app:8083/connectors
Schema Registry Config
# Bootstrap Kafka servers. If multiple servers are specified, they should be comma-separated.
bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data.
# Every Connect user will need to configure these based on the format they want their data in
# when loaded from or stored into Kafka
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
# The internal converter used for offsets and config data is configurable and must be specified,
# but most users will always want to use the built-in default. Offset and config data is never
# visible outside of Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
Kafka Connect log:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:510)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:334)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:510)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'impression_816': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"impression_816"; line: 1, column: 29]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'impression_816': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"impression_816"; line: 1, column: 29]
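(One way to see what the raw keys on the topic actually look like, a sketch assuming the broker container name from the Compose file:
docker-compose exec broker kafka-console-consumer --bootstrap-server broker:9092 --topic test_hdfs --from-beginning --property print.key=true
The key printed for each record, e.g. impression_816, is a bare string rather than JSON.)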
EDIT 4
[2018-08-22 02:05:51,140] ERROR WorkerSinkTask{id=hdfs-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:510)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: test_hdfs1
at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:97)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:510)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
[2018-08-22 02:05:51,141] ERROR WorkerSinkTask{id=hdfs-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
[2018-08-22 02:05:51,243] INFO Publish thread interrupted for client_id=consumer-8 client_type=CONSUMER session= cluster=lUWD_PR0RsiTkaunoUrUfA group=connect-hdfs-sink (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)
You set ksql-datagen ... format=json
But the error indicates you have set up the AvroConverter in Kafka Connect:
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
Look at your Compose file...
CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
If you want to produce Avro data instead, refer to the ksql-datagen docs.
And although you are currently producing JSON, that's not the format that will be put on HDFS with your configuration.
Avro is the default output format for HDFS Connect, as you can see if you refer to the configuration documentation:
format.class
The format class to use when writing data to the store. Format classes implement the io.confluent.connect.storage.format.Format interface.
Type: class
Default: io.confluent.connect.hdfs.avro.AvroFormat
Importance: high
These classes are available by default:
io.confluent.connect.hdfs.avro.AvroFormat
io.confluent.connect.hdfs.json.JsonFormat
io.confluent.connect.hdfs.parquet.ParquetFormat
io.confluent.connect.hdfs.string.StringFormat
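So, for example, writing JSON files to HDFS instead of the Avro default would mean overriding format.class in the connector config (a sketch; all other settings as in the POST above):
"format.class": "io.confluent.connect.hdfs.json.JsonFormat"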
If you don't use JsonFormat, then I believe that in order to output Avro from JSON, you need a JSON record that looks like this:
{
  "schema": {...},
  "payload": {...}
}
Otherwise, an Avro schema cannot be inferred from a JSON record.
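For illustration, a minimal schemas-enabled record of that shape might look like this (field names are hypothetical), which the JsonConverter can turn into Connect data and, from there, into Avro:
{
  "schema": {
    "type": "struct",
    "name": "user",
    "optional": false,
    "fields": [
      {"field": "userid", "type": "string", "optional": false},
      {"field": "viewtime", "type": "int64", "optional": true}
    ]
  },
  "payload": {"userid": "user_1", "viewtime": 1000}
}
Reading records like this also requires value.converter=org.apache.kafka.connect.json.JsonConverter with value.converter.schemas.enable=true on the worker or the connector.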
Through your series of edits, I think you switched to producing Avro but kept using the JsonConverter based on what I mentioned above, which isn't what I was suggesting. Basically, the Converter class type must match the producer's data format, and it defines the consumer deserializer.
For the serialization error with id -1, it's basically saying that the data in either the key or the value is not Avro. Now, KSQL doesn't quite work with Avro keys, so I'd wager it's the key deserializer that's failing. To address that issue, set
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
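This can go in the worker properties (or the Compose environment), or as a per-connector override; as a sketch, the connector config from EDIT 3 would gain one line:
"key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
With that, the key bytes are passed through untouched and only the value has to be valid Avro.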