
How to save a dataframe into multiple files based on unique columns in spark-scala

I have an inputDf that needs to be split based on the columns origin and destination, saving each unique combination to a different csv file.

(Using Spark 2.4.4)

val spark: SparkSession = SparkSession.builder().appName("Test").getOrCreate()

val inputRdd: RDD[(String, String, String, String, String, String)] = spark.sparkContext.parallelize(Seq(
  ("City1", "City2", "Sedan", "AE1235", "80", "2020-02-01"),
  ("City2", "City3", "Hatchback", "XY5434", "100", "2020-02-01"),
  ("City3", "City1", "Sedan", "YU3456", "120", "2020-02-01"),
  ("City3", "City2", "Sedan", "BV3555", "105", "2020-02-01"),
  ("City2", "City1", "SUV", "PO1234", "75", "2020-02-01"),
  ("City1", "City3", "SUV", "TY4123", "125", "2020-02-01"),
  ("City1", "City2", "Hatchback", "VI3415", "85", "2020-02-01"),
  ("City1", "City2", "SUV", "VF1244", "84", "2020-02-01"),
  ("City3", "City1", "Sedan", "EW1248", "124", "2020-02-01"),
  ("City2", "City1", "Hatchback", "GE576", "82", "2020-02-01"),
  ("City3", "City2", "Sedan", "PK2144", "104", "2020-02-01"),
  ("City3", "City1", "Hatchback", "PJ1244", "118", "2020-02-01"),
  ("City3", "City2", "SUV", "WF0976", "98", "2020-02-01"),
  ("City1", "City2", "Sedan", "WE876", "78", "2020-02-01"),
  ("City2", "City1", "Hatchback", "AB5467", "80", "2020-02-01")
))
val inputDf = spark.createDataFrame(inputRdd).toDF("origin", "destination", "vehicleType", "uniqueId", "distanceTravelled", "date")

Sample output:

.csv file 1:

origin,destination,vehicleType,uniqueId,distanceTravelled,date
City1,City2,Sedan,AE1235,80,2020-02-01
City1,City2,Hatchback,VI3415,85,2020-02-01
City1,City2,SUV,VF1244,84,2020-02-01
City1,City2,Sedan,WE876,78,2020-02-01

.csv file 2:

origin,destination,vehicleType,uniqueId,distanceTravelled,date
City3,City1,Sedan,YU3456,120,2020-02-01
City3,City1,Sedan,EW1248,124,2020-02-01
City3,City1,Hatchback,PJ1244,118,2020-02-01

.csv file 3:

origin,destination,vehicleType,uniqueId,distanceTravelled,date
City2,City1,SUV,PO1234,75,2020-02-01
City2,City1,Hatchback,GE576,82,2020-02-01
City2,City1,Hatchback,AB5467,80,2020-02-01

So far, I have tried collecting the unique combinations into an array of tuples and then looping over it with foreach, filtering inputDf for each combination and saving the filtered dataframe to csv each time:

// collect the distinct (origin, destination) pairs to the driver
val tuple = inputDf.groupBy("origin", "destination").count()
  .select("origin", "destination").rdd.map(r => (r(0), r(1))).collect

tuple.foreach(row => {
  val origin = row._1
  val destination = row._2
  // filter the full dataframe once per combination, then write it out
  val dataToWrite = inputDf.filter(inputDf.col("origin").equalTo(origin) && inputDf.col("destination").equalTo(destination))
  dataToWrite.repartition(1).write.mode("overwrite").format("csv").option("header", "true").save("/path/to/output/folder/" + origin + "-" + destination + ".csv")
})

This approach takes a lot of time because it filters inputDf once per combination, and the number of unique combinations is very large. What is the best way to do this?
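A small mitigation, assuming the loop above is kept (a sketch, not a full fix): caching inputDf lets each filter pass read from memory instead of recomputing the dataframe's lineage once per combination.

inputDf.cache()   // keep the dataframe in memory across the foreach iterations
inputDf.count()   // materialize the cache once, up front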

Edit: Each inputDf will only have data for a single date.

The output should contain files at the date level.

Like:

/output/City1-City2/2020-02-01.csv

/output/City1-City2/2020-02-02.csv

/output/City1-City2/2020-02-03.csv

/output/City3-City1/2020-02-01.csv

/output/City3-City1/2020-02-02.csv

... and so on

You can use partitionBy to split the data into separate csv files based on your combinations. I used coalesce to keep all the data in a single csv file per partition, which is not recommended if you have a huge amount of data. Run the code below and it will write every possible combination to a separate csv file.

scala> df.show()
+------+-----------+-----------+--------+-----------------+----------+
|origin|destination|vehicleType|uniqueId|distanceTravelled|      date|
+------+-----------+-----------+--------+-----------------+----------+
| City1|      City2|      Sedan|  AE1235|               80|2020-02-01|
| City2|      City3|  Hatchback|  XY5434|              100|2020-02-01|
| City3|      City1|      Sedan|  YU3456|              120|2020-02-01|
| City3|      City2|      Sedan|  BV3555|              105|2020-02-01|
| City2|      City1|        SUV|  PO1234|               75|2020-02-01|
| City1|      City3|        SUV|  TY4123|              125|2020-02-01|
| City1|      City2|  Hatchback|  VI3415|               85|2020-02-02|
| City1|      City2|        SUV|  VF1244|               84|2020-02-02|
| City3|      City1|      Sedan|  EW1248|              124|2020-02-02|
| City2|      City1|  Hatchback|   GE576|               82|2020-02-02|
| City3|      City2|      Sedan|  PK2144|              104|2020-02-02|
| City3|      City1|  Hatchback|  PJ1244|              118|2020-02-02|
| City3|      City2|        SUV|  WF0976|               98|2020-02-02|
| City1|      City2|      Sedan|   WE876|               78|2020-02-02|
| City2|      City1|  Hatchback|  AB5467|               80|2020-02-02|
+------+-----------+-----------+--------+-----------------+----------+


scala> val df1 = df.withColumn("combination", concat(col("origin"), lit("-"), col("destination")))

scala> df1.coalesce(1).write.partitionBy("combination","date").format("csv").option("header", "true").mode("overwrite").save("/stackOut/")
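If the data is too large for coalesce(1), a variant worth sketching (my own assumption, not part of the original answer) is to repartition on the partition columns instead, so each (combination, date) pair is written by its own task in parallel while still producing a single part file per output directory:

import org.apache.spark.sql.functions.col

// Sketch: hash-partition by the same columns used in partitionBy, so all
// rows for one (combination, date) pair land in one task and one file,
// without funnelling the whole dataset through a single coalesce(1) task.
df1.repartition(col("combination"), col("date"))
  .write
  .partitionBy("combination", "date")
  .format("csv")
  .option("header", "true")
  .mode("overwrite")
  .save("/stackOut/")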

The output will look like:

[Screenshots of the partitioned output folder structure]
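Note that partitionBy writes Hadoop-style directories such as /stackOut/combination=City1-City2/date=2020-02-01/part-*.csv rather than the exact file names requested in the edit. A post-processing sketch using the Hadoop FileSystem API (the /output target path and the renaming logic are illustrative assumptions, not from the original answer) could move each part file into place:

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
// Walk /stackOut/combination=<origin-destination>/date=<date>/part-*.csv
// and rename each part file to /output/<origin-destination>/<date>.csv
for (combDir <- fs.listStatus(new Path("/stackOut")) if combDir.isDirectory) {
  val combination = combDir.getPath.getName.stripPrefix("combination=")
  for (dateDir <- fs.listStatus(combDir.getPath) if dateDir.isDirectory) {
    val date = dateDir.getPath.getName.stripPrefix("date=")
    fs.listStatus(dateDir.getPath).map(_.getPath)
      .find(_.getName.startsWith("part-"))   // one part file per directory here
      .foreach { part =>
        val target = new Path(s"/output/$combination/$date.csv")
        fs.mkdirs(target.getParent)
        fs.rename(part, target)
      }
  }
}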

