
Java read from json file using Apache Spark specifying the Schema

I have some json files in this format:

{"_t":1480647647,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1480647676,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1483161958,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1483162393,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1483499947,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1505361824,"_p":"pfitza@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505362047,"_p":"konit@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505362372,"_p":"oechslin@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505362854,"_p":"corrada@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505366071,"_p":"vertigo@test.com","_n":"added_to_team","account":"1234"}

I am using Apache Spark in a Java application to read this json file and save it in parquet format.

If I do not use a schema definition, the file is parsed without any problem. This is my code sample:

Dataset<Row> dataset = spark.read().json(pathToFile);
dataset.show(100);
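
The parquet write itself is just the standard writer call; roughly like this (the output path here is only a placeholder):

dataset.write().parquet("/path/to/parquet-output");  // placeholder output directory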

This is my console output:

+-------------+------------------+----------+-------+-------+-----------+
|           _n|                _p|        _t|account|channel|device_type|
+-------------+------------------+----------+-------+-------+-----------+
|   app_loaded| rattenbt@test.com|1480647647|   null|   null|    desktop|
|   app_loaded| rattenbt@test.com|1480647676|   null|   null|    desktop|
|   app_loaded| rattenbt@test.com|1483161958|   null|   null|    desktop|
|   app_loaded| rattenbt@test.com|1483162393|   null|   null|    desktop|
|   app_loaded| rattenbt@test.com|1483499947|   null|   null|    desktop|
|added_to_team|   pfitza@test.com|1505361824|   1234|   null|       null|
|added_to_team|    konit@test.com|1505362047|   1234|   null|       null|
...

When I use a schema definition like this:

StructType schema = new StructType();
schema.add("_n", StringType, true);
schema.add("_p", StringType, true);
schema.add("_t", TimestampType, true);
schema.add("account", StringType, true);
schema.add("channel", StringType, true);
schema.add("device_type", StringType, true);
// Read data from file
Dataset<Row> dataset = spark.read().schema(schema).json(pathToFile);
dataset.show(100);

I get this console output:

++
||
++
||
||
||
||
...

What is wrong with the schema definition?

StructType is immutable, so it just discards the result of every add call. If you print

schema.printTreeString();

you will see that it does not contain any fields:

root

You should use:

StructType schema = new StructType()
  .add("_n", StringType, true)
  .add("_p", StringType, true)
  .add("_t", TimestampType, true)
  .add("account", StringType, true)
  .add("channel", StringType, true)
  .add("device_type", StringType, true);
