
Java: Read JSON from a file, convert to ORC and write to a file

I need to automate the JSON-to-ORC conversion process. I was able to get most of the way there using Apache's ORC-tools package, except that JsonReader does not handle the Map type and throws an exception. So, the following works, but it does not handle the Map type.

Path hadoopInputPath = new Path(input);
try (RecordReader recordReader = new JsonReader(hadoopInputPath, schema, hadoopConf)) { // throws when schema contains Map type
    try (Writer writer = OrcFile.createWriter(new Path(output), OrcFile.writerOptions(hadoopConf).setSchema(schema))) {
        VectorizedRowBatch batch = schema.createRowBatch();
        while (recordReader.nextBatch(batch)) {
            writer.addRowBatch(batch);
        }
    }
}
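
For reference, the schema above is an org.apache.orc.TypeDescription. A minimal sketch of building one that includes the problematic Map type (the field names here are illustrative, not from the original code):

import org.apache.orc.TypeDescription;

// Illustrative schema: the map<string,string> field is what JsonReader fails on.
TypeDescription schema = TypeDescription.fromString(
        "struct<id:string,attrs:map<string,string>>");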

So I started looking into using Hive classes for the JSON-to-ORC conversion, which has the added advantage that in the future I could convert to other formats, such as AVRO, with minor code changes. However, I was not sure what the best way of doing this with the Hive classes is. Specifically, it is not clear how to write an HCatRecord to a file, as shown below.

    HCatRecordSerDe hCatRecordSerDe = new HCatRecordSerDe();
    SerDeUtils.initializeSerDe(hCatRecordSerDe, conf, tblProps, null);

    OrcSerde orcSerde = new OrcSerde();
    SerDeUtils.initializeSerDe(orcSerde, conf, tblProps, null);

    Writable orcOut = orcSerde.serialize(hCatRecord, hCatRecordSerDe.getObjectInspector());
    assertNotNull(orcOut);

    InputStream input = getClass().getClassLoader().getResourceAsStream("test.json.snappy");
    SnappyCodec compressionCodec = new SnappyCodec();
    try (CompressionInputStream inputStream = compressionCodec.createInputStream(input)) {
        LineReader lineReader = new LineReader(new InputStreamReader(inputStream, Charsets.UTF_8));
        String jsonLine = null;
        while ((jsonLine = lineReader.readLine()) != null) {
            Writable jsonWritable = new Text(jsonLine);
            DefaultHCatRecord hCatRecord = (DefaultHCatRecord) jsonSerDe.deserialize(jsonWritable); // jsonSerDe: a JsonSerDe assumed initialized elsewhere (not shown in the snippet)
            // TODO: Write ORC to file????
        }
    }

Any thoughts on how to complete the above code, or on a simpler way of doing JSON-to-ORC, would be greatly appreciated.
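
For what it is worth, one way the TODO might be filled in is with Hive's OrcOutputFormat, which can persist the Writable that OrcSerde produces. The following is a rough, untested sketch under that assumption; the JobConf setup and output path are hypothetical:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical setup: wraps the Configuration and table properties used above.
JobConf jobConf = new JobConf(conf);
Path orcPath = new Path("/tmp/output.orc"); // hypothetical output location

FileSinkOperator.RecordWriter orcWriter = new OrcOutputFormat()
        .getHiveRecordWriter(jobConf, orcPath, NullWritable.class,
                false /* isCompressed */, tblProps, null /* progress */);

// Inside the line-reading loop, after deserializing each record:
Writable orcOut = orcSerde.serialize(hCatRecord, hCatRecordSerDe.getObjectInspector());
orcWriter.write(orcOut);

// After the loop:
orcWriter.close(false /* abort */);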

Here is what I ended up doing using the Spark libraries, per cricket_007's suggestion:

Maven dependencies (with some exclusions to keep the maven-duplicate-finder-plugin happy):

    <properties>
        <dep.jackson.version>2.7.9</dep.jackson.version>
        <spark.version>2.2.0</spark.version>
        <scala.binary.version>2.11</scala.binary.version>
    </properties>

    <dependency>
        <groupId>com.fasterxml.jackson.module</groupId>
        <artifactId>jackson-module-scala_${scala.binary.version}</artifactId>
        <version>${dep.jackson.version}</version>
        <exclusions>
            <exclusion>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>apache-log4j-extras</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-client</artifactId>
            </exclusion>
            <exclusion>
                <groupId>net.java.dev.jets3t</groupId>
                <artifactId>jets3t</artifactId>
            </exclusion>
            <exclusion>
                <groupId>com.google.code.findbugs</groupId>
                <artifactId>jsr305</artifactId>
            </exclusion>
            <exclusion>
                <groupId>stax</groupId>
                <artifactId>stax-api</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.objenesis</groupId>
                <artifactId>objenesis</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

The Java code, in brief:

SparkConf sparkConf = new SparkConf()
    .setAppName("Converter Service")
    .setMaster("local[*]");

SparkSession sparkSession = SparkSession.builder().config(sparkConf).enableHiveSupport().getOrCreate();

// read input data
Dataset<Row> events = sparkSession.read()
    .format("json")
    .schema(inputConfig.getSchema()) // StructType describing input schema
    .load(inputFile.getPath());

// write data out
DataFrameWriter<Row> frameWriter = events
    .selectExpr(
        // useful if you want to change the schema before writing it to ORC, e.g. ["`col1` as `FirstName`", "`col2` as `LastName`"]
        JavaConversions.asScalaBuffer(outputSchema.getColumns()))
    .write()
    .options(ImmutableMap.of("compression", "zlib"))
    .format("orc")
    .save(outputUri.getPath());
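
To sanity-check the result, the ORC output can be read back with the same SparkSession; a quick verification sketch:

// Read the ORC files back and inspect the schema and a few rows.
Dataset<Row> check = sparkSession.read()
    .format("orc")
    .load(outputUri.getPath());
check.printSchema();
check.show(10, false);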

Hope this helps someone get started.
