
How to save a dataframe into multiple files based on unique columns in spark-scala

I have an inputDf that I need to split on the columns origin and destination, saving each unique combination to a separate CSV file.

(Using Spark 2.4.4)

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession.builder().appName("Test").getOrCreate()

// Sample input: (origin, destination, vehicleType, uniqueId, distanceTravelled, date)
val inputRdd: RDD[(String, String, String, String, String, String)] = spark.sparkContext.parallelize(Seq(
  ("City1", "City2", "Sedan", "AE1235", "80", "2020-02-01"),
  ("City2", "City3", "Hatchback", "XY5434", "100", "2020-02-01"),
  ("City3", "City1", "Sedan", "YU3456", "120", "2020-02-01"),
  ("City3", "City2", "Sedan", "BV3555", "105", "2020-02-01"),
  ("City2", "City1", "SUV", "PO1234", "75", "2020-02-01"),
  ("City1", "City3", "SUV", "TY4123", "125", "2020-02-01"),
  ("City1", "City2", "Hatchback", "VI3415", "85", "2020-02-01"),
  ("City1", "City2", "SUV", "VF1244", "84", "2020-02-01"),
  ("City3", "City1", "Sedan", "EW1248", "124", "2020-02-01"),
  ("City2", "City1", "Hatchback", "GE576", "82", "2020-02-01"),
  ("City3", "City2", "Sedan", "PK2144", "104", "2020-02-01"),
  ("City3", "City1", "Hatchback", "PJ1244", "118", "2020-02-01"),
  ("City3", "City2", "SUV", "WF0976", "98", "2020-02-01"),
  ("City1", "City2", "Sedan", "WE876", "78", "2020-02-01"),
  ("City2", "City1", "Hatchback", "AB5467", "80", "2020-02-01")
))
val inputDf = spark.createDataFrame(inputRdd).toDF("origin", "destination", "vehicleType", "uniqueId", "distanceTravelled", "date")

Sample output:

CSV file 1:

origin,destination,vehicleType,uniqueId,distanceTravelled,date
City1,City2,Sedan,AE1235,80,2020-02-01
City1,City2,Hatchback,VI3415,85,2020-02-01
City1,City2,SUV,VF1244,84,2020-02-01
City1,City2,Sedan,WE876,78,2020-02-01

CSV file 2:

origin,destination,vehicleType,uniqueId,distanceTravelled,date
City3,City1,Sedan,YU3456,120,2020-02-01
City3,City1,Sedan,EW1248,124,2020-02-01
City3,City1,Hatchback,PJ1244,118,2020-02-01

CSV file 3:

origin,destination,vehicleType,uniqueId,distanceTravelled,date
City2,City1,SUV,PO1234,75,2020-02-01
City2,City1,Hatchback,GE576,82,2020-02-01
City2,City1,Hatchback,AB5467,80,2020-02-01

So far I have tried collecting the unique combinations into an array of tuples and then iterating over it with foreach, filtering inputDf for each combination and saving the filtered dataframe to CSV:

val combinations = inputDf.select("origin", "destination").distinct()
  .rdd.map(r => (r.getString(0), r.getString(1))).collect()

combinations.foreach { case (origin, destination) =>
  val dataToWrite = inputDf.filter(inputDf.col("origin").equalTo(origin) && inputDf.col("destination").equalTo(destination))
  // Note: save() creates a directory with this name containing part files,
  // not a single file named "<origin>-<destination>.csv"
  dataToWrite.repartition(1).write.mode("overwrite").format("csv").option("header", "true")
    .save("/path/to/output/folder/" + origin + "-" + destination + ".csv")
}

This approach takes a lot of time because it filters inputDf once per combination, and the number of unique combinations is quite large. What would be an optimal way to do this?

EDIT: Each inputDf will have data for only one date.

The output should contain files at the date level, like:

/output/City1-City2/2020-02-01.csv
/output/City1-City2/2020-02-02.csv
/output/City1-City2/2020-02-03.csv
/output/City3-City1/2020-02-01.csv
/output/City3-City1/2020-02-02.csv
... and so on

You can use partitionBy to divide the data into a separate CSV file per combination. I have used coalesce to keep all of each partition's data in one CSV file, which is not recommended if you have a lot of data. Go through the code below, which writes every possible combination to its own CSV file.

scala> df.show()
+------+-----------+-----------+--------+-----------------+----------+
|origin|destination|vehicleType|uniqueId|distanceTravelled|      date|
+------+-----------+-----------+--------+-----------------+----------+
| City1|      City2|      Sedan|  AE1235|               80|2020-02-01|
| City2|      City3|  Hatchback|  XY5434|              100|2020-02-01|
| City3|      City1|      Sedan|  YU3456|              120|2020-02-01|
| City3|      City2|      Sedan|  BV3555|              105|2020-02-01|
| City2|      City1|        SUV|  PO1234|               75|2020-02-01|
| City1|      City3|        SUV|  TY4123|              125|2020-02-01|
| City1|      City2|  Hatchback|  VI3415|               85|2020-02-02|
| City1|      City2|        SUV|  VF1244|               84|2020-02-02|
| City3|      City1|      Sedan|  EW1248|              124|2020-02-02|
| City2|      City1|  Hatchback|   GE576|               82|2020-02-02|
| City3|      City2|      Sedan|  PK2144|              104|2020-02-02|
| City3|      City1|  Hatchback|  PJ1244|              118|2020-02-02|
| City3|      City2|        SUV|  WF0976|               98|2020-02-02|
| City1|      City2|      Sedan|   WE876|               78|2020-02-02|
| City2|      City1|  Hatchback|  AB5467|               80|2020-02-02|
+------+-----------+-----------+--------+-----------------+----------+


scala> import org.apache.spark.sql.functions.{col, concat, lit}

scala> val df1 = df.withColumn("combination", concat(col("origin"), lit("-"), col("destination")))

scala> df1.coalesce(1).write.partitionBy("combination", "date").format("csv").option("header", "true").mode("overwrite").save("/stackOut/")

The output layout will look like the following: partitionBy encodes each partition column as a key=value directory, and coalesce(1) leaves a single part file in each one.

/stackOut/combination=City1-City2/date=2020-02-01/part-00000-<uuid>.csv
/stackOut/combination=City1-City2/date=2020-02-02/part-00000-<uuid>.csv
/stackOut/combination=City2-City1/date=2020-02-01/part-00000-<uuid>.csv
... and so on for every combination/date pair
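The question's EDIT asks for files named /output/<origin>-<destination>/<date>.csv rather than Spark's key=value directories and part-file names. Spark does not let you control the part-file names directly, but one common approach is to rename the part files after the write using the Hadoop FileSystem API. Below is a minimal sketch of that post-processing step, assuming the /stackOut source and /output target paths from the code above, Spark's default "part-" file prefix, and exactly one part file per partition directory (which coalesce(1) guarantees here):

import org.apache.hadoop.fs.{FileSystem, Path}

// Post-processing sketch: move Spark's part files into the
// /output/<origin>-<destination>/<date>.csv layout from the question's EDIT.
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val outRoot = new Path("/stackOut")

fs.listStatus(outRoot).filter(_.isDirectory).foreach { combDir =>
  // Directory names look like "combination=City1-City2"
  val combination = combDir.getPath.getName.stripPrefix("combination=")
  fs.listStatus(combDir.getPath).filter(_.isDirectory).foreach { dateDir =>
    // Subdirectory names look like "date=2020-02-01"
    val date = dateDir.getPath.getName.stripPrefix("date=")
    fs.listStatus(dateDir.getPath)
      .map(_.getPath)
      .find(_.getName.startsWith("part-"))
      .foreach { src =>
        val target = new Path(s"/output/$combination/$date.csv")
        fs.mkdirs(target.getParent)   // create /output/<combination>/ if needed
        fs.rename(src, target)        // move the single part file into place
      }
  }
}

This keeps the heavy lifting in a single partitioned write (one pass over the data instead of one filter-and-write per combination) and reduces the per-combination work to a cheap filesystem rename.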
