Merge multiple dataframes given within a foreach to one dataframe - Scala spark

I have two CSV files, shown below.

a.csv

ID,Name,Age,Subject
1,Arun,23,English
2,Melan,22,IT

b.csv 

ID,Name,Department_ID,Age,Subject
3,Kumar,004,21,Science
4,Sagar,008,20,IT

As you can see, the structures of these files are different. I only want the ID and Subject columns, so I made a list of the file paths and did the following.

val cols = List("ID", "Subject")

val file_path = List("path to a.csv", "path to b.csv")

file_path.foreach(path => {
  val df =
    spark
      .read
      .option("header", "true")
      .option("delimiter", ",")
      .csv(path)
      .select(cols.head, cols.tail: _*)

  df.show()
  df.count()
})

First dataframe:

+---+-------+
| ID|Subject|
+---+-------+
|  1|English|
|  2|     IT|
+---+-------+

Second dataframe:

+---+-------+
| ID|Subject|
+---+-------+
|  3|Science|
|  4|     IT|
+---+-------+

But I need a single dataframe obtained by merging these two dataframes, like below:

+---+-------+
| ID|Subject|
+---+-------+
|  1|English|
|  2|     IT|
|  3|Science|
|  4|     IT|
+---+-------+

Is there a way to do this? I don't want to write the two dataframes to files and read them back as one.

Thank you.

Use map & reduce instead of the foreach method to achieve this.

Please check the following:

scala> val dfr = spark.read.format("csv").option("header","true")
dfr: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@cd6ccda

scala> val paths = List("/tmp/data/da.csv","/tmp/data/db.csv")
paths: List[String] = List(/tmp/data/da.csv, /tmp/data/db.csv)

scala> val columns = List("id","subject").map(c => col(c))
columns: List[org.apache.spark.sql.Column] = List(id, subject)

scala> spark.time { paths.map(path => dfr.load(path).select(columns:_*)).reduce(_ union _).show(false) }
+---+-------+
|id |subject|
+---+-------+
|1  |English|
|2  |IT     |
|3  |Science|
|4  |IT     |
+---+-------+

Time taken: 247 ms

scala>
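
If you also want the per-file show() and count() from the original loop, a minimal sketch (reusing the dfr, paths, and columns values defined in the session above) is to keep the per-file dataframes in a list before reducing:

// One dataframe per path, projected to the wanted columns.
val dfs = paths.map(path => dfr.load(path).select(columns: _*))

// Inspect each file's dataframe, as the original foreach did.
dfs.zip(paths).foreach { case (df, path) =>
  println(s"$path: ${df.count()} rows")
  df.show(false)
}

// Merge them all into a single dataframe.
val merged = dfs.reduce(_ union _)

The positional union is safe here because every per-file dataframe was projected to the same columns in the same order.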

Edit: Since the two files have different schemas, loading all the files at once will give you wrong results. Check below.

scala> val da = spark.read.option("header","true").csv("/tmp/data/da.csv")
da: org.apache.spark.sql.DataFrame = [id: string, name: string ... 2 more fields]

scala> da.show(false)
+---+-----+---+-------+
|id |name |age|subject|
+---+-----+---+-------+
|1  |Arun |23 |English|
|2  |Melan|22 |IT     |
+---+-----+---+-------+


scala> val db = spark.read.option("header","true").csv("/tmp/data/db.csv")
db: org.apache.spark.sql.DataFrame = [id: string, name: string ... 3 more fields]

scala> db.show(false)
+---+-----+-------------+---+-------+
|id |name |department_id|age|subject|
+---+-----+-------------+---+-------+
|3  |Kumar|004          |21 |Science|
|4  |Sagar|008          |20 |IT     |
+---+-----+-------------+---+-------+


scala> val paths = List("/tmp/data/da.csv","/tmp/data/db.csv")
paths: List[String] = List(/tmp/data/da.csv, /tmp/data/db.csv)

scala> val columns = List("id","subject").map(c => col(c))
columns: List[org.apache.spark.sql.Column] = List(id, subject)

scala> spark.read.option("header","true").option("delimiter",",").csv(paths: _*).select(columns:_*).show(false)
20/04/29 18:35:07 WARN CSVDataSource: CSV header does not conform to the schema.
 Header: id,
 Schema: id, subject
Expected: subject but found:
CSV file: file:///tmp/data/da.csv
+---+-------+
|id |subject|
+---+-------+
|3  |Science|
|4  |IT     |
|1  |null   |
|2  |null   |
+---+-------+

scala> spark.read.option("header","true").option("delimiter",",").csv(paths: _*).select("id","name").show(false) // common columns from both files - id, name
+---+-----+
|id |name |
+---+-----+
|3  |Kumar|
|4  |Sagar|
|1  |Arun |
|2  |Melan|
+---+-----+

scala> spark.read.option("header","true").option("delimiter",",").csv(paths: _*).select("id","name","age").show(false) // file-1 columns: id,name,age,subject; file-2 columns: id,name,department_id,age,subject; age sits at a different position in each file
20/04/29 18:43:53 WARN CSVDataSource: CSV header does not conform to the schema.
 Header: id, name, subject
 Schema: id, name, age
Expected: age but found: subject
CSV file: file:///tmp/data/da.csv
+---+-----+-------+
|id |name |age    |
+---+-----+-------+
|3  |Kumar|21     |
|4  |Sagar|20     |
|1  |Arun |English|
|2  |Melan|IT     |
+---+-----+-------+
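
If you are on Spark 3.1 or later, another way to cope with the differing schemas is to read each file separately and combine them with unionByName(allowMissingColumns = true), which matches columns by name rather than by position. A minimal sketch, assuming the same /tmp/data files as above:

// Read each file on its own, so each gets its own header-derived schema.
val da2 = spark.read.option("header", "true").csv("/tmp/data/da.csv")
val db2 = spark.read.option("header", "true").csv("/tmp/data/db.csv")

// unionByName aligns columns by name; with allowMissingColumns = true,
// columns present on only one side (department_id here) are null-filled.
val unioned = da2.unionByName(db2, allowMissingColumns = true)

unioned.select("id", "subject").show(false)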

Spark DataFrames have built-in support for loading from multiple files at once. Rather than loading them individually and then joining them, I think it is better to load them in a single call, as shown below.

import org.apache.spark.sql.SparkSession

object LoadJoinDataframe {

  def main(args: Array[String]): Unit = {
    val cols = List("ID", "Subject")

    val file_path = List("path to a.csv", "path to b.csv")

    // Build (or reuse) a SparkSession.
    val spark = SparkSession.builder().getOrCreate()

    // Passing all paths to a single csv() call loads them as one dataframe.
    val df = spark
      .read
      .option("header", "true")
      .option("delimiter", ",")
      .csv(file_path: _*)
      .select(cols.head, cols.tail: _*)

    df.show()
    df.count()
  }
}
