How to split a large parquet file into multiple parquet files and save them in different Hadoop paths by a time column
My parquet file contains data like this:
1, a, 1980-09-08
2, b, 1980-09-08
3, c, 2017-09-09
I would like the output to look like this:
the folder 19800908 contains the data
1, a, 1980-09-08
2, b, 1980-09-08
and the folder 20170909 contains the data
3, c, 2017-09-09
I know I can groupBy on the date key, but I don't know how to write out multiple parquet files using a class such as MultipleTextOutputFormat. I don't want to loop over the keys with foreach, which is too slow and needs a lot of memory. My current code looks like this:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row
import org.apache.spark.storage.StorageLevel

val input = sqlContext.read.parquet(sourcePath)
  .persist(StorageLevel.DISK_ONLY)

// format is a date formatter (presumably a SimpleDateFormat with pattern
// "yyyyMMdd") applied to the date column at index 3 to build the key.
val keyRows: RDD[(Long, Row)] = input.mapPartitions { partition =>
  partition.flatMap { row =>
    val key = format.format(row.getDate(3)).toLong
    Option((key, row))
  }
}.persist(StorageLevel.DISK_ONLY)
val keys = keyRows.keys.distinct().collect()
for (key <- keys) {
  val rows = keyRows.filter { case (_key, _) => _key == key }.map(_._2)
  val df = sqlContext.createDataFrame(rows, input.schema)
  val path = s"${outputPrefix}/$key"
  HDFSUtils.deleteIfExist(path)
  df.write.parquet(path)
}
If I use MultipleTextOutputFormat as follows, the output is not what I want:
keyRows.groupByKey()
  .saveAsHadoopFile(conf.getOutputPrefixDirectory, classOf[String], classOf[String],
    classOf[SimpleMultipleTextOutputFormat[_, _]])
public class SimpleMultipleTextOutputFormat<A, B> extends MultipleTextOutputFormat<A, B> {
    @Override
    protected String generateFileNameForKeyValue(A key, B value, String name) {
        // return super.generateFileNameForKeyValue(key, value, name);
        return key.toString();
    }
}
Writing with a partition column can be used:
df.write.partitionBy("dateString").parquet("/path/to/file")
The difference: the folder names will look like "dateString=2017-09-09", and the new string column "dateString" has to be created before saving.
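A minimal sketch of that answer (the column name dateCol and the output path are illustrative, since the question's columns are unnamed):

import org.apache.spark.sql.functions.{col, date_format}

// Derive a string partition column from the date column, then write once.
val withDate = input.withColumn("dateString", date_format(col("dateCol"), "yyyyMMdd"))
withDate.write.partitionBy("dateString").parquet("/path/to/file")

// Resulting HDFS layout:
//   /path/to/file/dateString=19800908/part-*.parquet
//   /path/to/file/dateString=20170909/part-*.parquet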
Following this post, spark partition data writing by timestamp, I now do:
input
  .withColumn("_key", date_format(col(partitionField), format.toPattern))
  .write
  .partitionBy("_key")
  .parquet(conf.getOutputPrefixDirectory)
But how can I remove the "_key=" prefix from the folder names?
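One possible approach (a sketch, not from the original thread): after the partitioned write finishes, rename each partition directory with Hadoop's FileSystem API to strip the "_key=" prefix, so "_key=19800908" becomes "19800908". Here sc is the SparkContext:

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(sc.hadoopConfiguration)
val outputDir = new Path(conf.getOutputPrefixDirectory)
// Rename every "_key=<date>" partition folder to a plain "<date>" folder.
fs.listStatus(outputDir)
  .filter(_.getPath.getName.startsWith("_key="))
  .foreach { status =>
    val plainName = status.getPath.getName.stripPrefix("_key=")
    fs.rename(status.getPath, new Path(outputDir, plainName))
  }

Note that once the "_key=" prefix is gone, Spark can no longer discover the folders as partitions when reading the top-level directory back, so this only makes sense if downstream consumers expect plain date folders.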