[英]Java read from json file using Apache Spark specifying the Schema
I have some json files in this format:
{"_t":1480647647,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1480647676,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1483161958,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1483162393,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1483499947,"_p":"rattenbt@test.com","_n":"app_loaded","device_type":"desktop"}
{"_t":1505361824,"_p":"pfitza@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505362047,"_p":"konit@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505362372,"_p":"oechslin@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505362854,"_p":"corrada@test.com","_n":"added_to_team","account":"1234"}
{"_t":1505366071,"_p":"vertigo@test.com","_n":"added_to_team","account":"1234"}
I use Apache Spark in a Java application to read these json files and save them in Parquet format.
If I don't use a schema definition, the file is parsed without problems. Here is my code sample:
Dataset<Row> dataset = spark.read().json(pathToFile);
dataset.show(100);
This is my console output:
+-------------+------------------+----------+-------+-------+-----------+
| _n| _p| _t|account|channel|device_type|
+-------------+------------------+----------+-------+-------+-----------+
| app_loaded| rattenbt@test.com|1480647647| null| null| desktop|
| app_loaded| rattenbt@test.com|1480647676| null| null| desktop|
| app_loaded| rattenbt@test.com|1483161958| null| null| desktop|
| app_loaded| rattenbt@test.com|1483162393| null| null| desktop|
| app_loaded| rattenbt@test.com|1483499947| null| null| desktop|
|added_to_team| pfitza@test.com|1505361824| 1234| null| null|
|added_to_team| konit@test.com|1505362047| 1234| null| null|
...
But when I use a schema definition like this
StructType schema = new StructType();
schema.add("_n", StringType, true);
schema.add("_p", StringType, true);
schema.add("_t", TimestampType, true);
schema.add("account", StringType, true);
schema.add("channel", StringType, true);
schema.add("device_type", StringType, true);
// Read data from file
Dataset<Row> dataset = spark.read().schema(schema).json(pathToFile);
dataset.show(100);
I get this console output:
++
||
++
||
||
||
||
...
What is wrong with the schema definition?
StructType
is immutable, so every add simply discards its result. If you print
schema.printTreeString()
you'll see that it contains no fields:
root
You should use instead:
StructType schema = new StructType()
.add("_n", StringType, true)
.add("_p", StringType, true)
.add("_t", TimestampType, true)
.add("account", StringType, true)
.add("channel", StringType, true)
.add("device_type", StringType, true);
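Since each add() returns a new StructType rather than mutating the receiver, the whole chain must be assigned to a variable. A minimal end-to-end sketch putting this together (the input and output paths are hypothetical placeholders, and local[*] is assumed only for running the sketch standalone):

```java
import static org.apache.spark.sql.types.DataTypes.StringType;
import static org.apache.spark.sql.types.DataTypes.TimestampType;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;

public class JsonToParquet {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("json-to-parquet")
                .master("local[*]") // local mode, assumed for this sketch only
                .getOrCreate();

        // Chain the add() calls: each call returns a NEW StructType,
        // so the result of the entire chain is what gets assigned.
        StructType schema = new StructType()
                .add("_n", StringType, true)
                .add("_p", StringType, true)
                .add("_t", TimestampType, true)
                .add("account", StringType, true)
                .add("channel", StringType, true)
                .add("device_type", StringType, true);

        Dataset<Row> dataset = spark.read()
                .schema(schema)
                .json("events.json");       // hypothetical input path
        dataset.show(100);
        dataset.write().parquet("events"); // hypothetical output path

        spark.stop();
    }
}
```

Calling `schema.printTreeString()` on the chained result now lists all six fields instead of the bare `root` shown above.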