
Error while writing to HDFS using Kafka HDFS Connect

I am trying to write data in Avro format from my Java code to Kafka and then into HDFS using the Kafka HDFS connector, and I am running into some issues. When I use the simple schema and data provided on the Confluent Platform website, I am able to write data to HDFS, but when I try to use a complex Avro schema, I get this error in the HDFS connector logs:

ERROR Task hdfs-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Did not find matching union field for data: PROD
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:973)
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:981)
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:981)
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:981)
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:981)
    at io.confluent.connect.avro.AvroData.toConnectData(AvroData.java:782)
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:346)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I am using Confluent Platform 3.0.0.
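The HDFS sink itself is the standard Confluent HDFS sink connector. For reference, a minimal connector configuration for this kind of pipeline looks roughly like the following (the connector name matches the task name in the log above; the topic name, HDFS URL and flush size are illustrative placeholders, not the exact values used here):

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
# topic, HDFS URL and flush size below are placeholders
topics=risk_measures
hdfs.url=hdfs://namenode:8020
flush.size=3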

My Java code:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerUrl);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
props.put("schema.registry.url", <url>);
// Set any other properties
KafkaProducer<Object, Object> producer = new KafkaProducer<>(props);

// Parse the Avro schema and prepare a generic datum reader for it
Schema schema = new Schema.Parser().parse(new FileInputStream("avsc/schema.avsc"));
DatumReader<Object> reader = new GenericDatumReader<Object>(schema);

// Decode the JSON-encoded Avro data file against the schema
InputStream input = new FileInputStream("json/data.json");
DataInputStream din = new DataInputStream(input);
Decoder decoder = DecoderFactory.get().jsonDecoder(schema, din);

// Read records until EOF; only the last datum read is sent below
Object datum = null;
while (true) {
    try {
        datum = reader.read(null, decoder);
    } catch (EOFException e) {
        break;
    }
}

ProducerRecord<Object, Object> message = new ProducerRecord<Object, Object>(topic, datum);
producer.send(message);
producer.close();

The schema (generated from an avdl file):

{
  "type" : "record",
  "name" : "RiskMeasureEvent",
  "namespace" : "risk",
  "fields" : [ {
    "name" : "info",
    "type" : {
      "type" : "record",
      "name" : "RiskMeasureInfo",
      "fields" : [ {
        "name" : "source",
        "type" : {
          "type" : "record",
          "name" : "Source",
          "fields" : [ {
            "name" : "app",
            "type" : {
              "type" : "record",
              "name" : "Application",
              "fields" : [ {
                "name" : "csi_id",
                "type" : "string"
              }, {
                "name" : "name",
                "type" : "string"
              } ]
            }
          }, {
            "name" : "env",
            "type" : {
              "type" : "record",
              "name" : "Environment",
              "fields" : [ {
                "name" : "value",
                "type" : [ {
                  "type" : "enum",
                  "name" : "EnvironmentConstants",
                  "symbols" : [ "DEV", "UAT", "PROD" ]
                }, "string" ]
              } ]
            }
          }, ...
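The enum/string union on env.value comes from the IDL. For illustration, the avdl fragment that would generate it presumably looks something like this (reconstructed; the protocol name is made up and the original avdl is not shown):

@namespace("risk")
protocol RiskMeasures {
  enum EnvironmentConstants {
    DEV, UAT, PROD
  }

  record Environment {
    union { EnvironmentConstants, string } value;
  }
}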

The JSON file:

{
  "info": {
    "source": {
      "app": {
        "csi_id": "123",
        "name": "ABC"
      },
      "env": {
        "value": {
          "risk.EnvironmentConstants": "PROD"
        }
      }, ...

It seems to be a problem with the schema, but I cannot identify the issue.

I'm an engineer at Confluent. This is a bug in how the Avro Converter handles the union schema you have for env. I created issue-393 to track it, and I also put together a pull request with the fix. This should be merged soon.
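Until that fix lands in a release, one possible interim workaround (my suggestion here, not part of the fix itself) is to drop the union on the producer side, if changing the schema is an option for you, e.g. declaring value as just the enum:

{
  "name" : "value",
  "type" : {
    "type" : "enum",
    "name" : "EnvironmentConstants",
    "symbols" : [ "DEV", "UAT", "PROD" ]
  }
}

With a non-union value, Avro's JSON encoding no longer needs the "risk.EnvironmentConstants" branch wrapper, so the data file would simply contain "value": "PROD".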

J
