
Scala spark how to interact with a List[Option[Map[String, DataFrame]]]

I'm trying to interact with this List[Option[Map[String, DataFrame]]] but I'm having a bit of trouble.

Inside it has something like this:

customer1 -> dataframeX 
customer2 -> dataframeY 
customer3 -> dataframeZ

Where the customer is an identifier that will become a new column.

I need to do a union of dataframeX, dataframeY, and dataframeZ (all of the DataFrames have the same columns). Before, I had this:

map(_.get).reduce(_ union _).select(columns:_*)

That was working fine because I only had a List[Option[DataFrame]] and didn't need the identifier, but I'm having trouble with the new list. My idea is to modify my old mapping. I know I can do things like "(0).get", which would give me "Map(customer1 -> dataframeX)", but I'm not quite sure how to do that iteration inside the mapping and get the final DataFrame that is the union of all three plus the identifier column. My idea:

map(/*get identifier here along with dataframe*/).reduce(_ union _).select(identifier +: columns:_*)

The final result would be something like:最终结果将类似于:

-------------------------------
|identifier | product  |State | 
-------------------------------
|  customer1|  prod1   |  VA  |
|  customer1|  prod132 |  VA  |
|  customer2|  prod32  |  CA  | 
|  customer2|  prod51  |  CA  |
|  customer2|  prod21  |  AL  |
|  customer2|  prod52  |  AL  |
-------------------------------

You could use collect to unnest Option[Map[String, DataFrame]] to Map[String, DataFrame]. To put the identifier into a column, you should use withColumn. So your code could look like:

import org.apache.spark.sql.functions.lit

val result: DataFrame = frames.collect {
    case Some(m) =>                  // collect skips the None entries
      m.map {
        case (identifier, dataframe) =>
          dataframe.withColumn("identifier", lit(identifier))
      }.reduce(_ union _)            // union the frames within one map
  }.reduce(_ union _)                // then union across the outer list
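A note on the shape of this traversal: collect silently drops the None entries, and each surviving map's DataFrames are tagged with their identifier before being unioned. A minimal sketch of the same traversal on plain Scala collections, with List[String] standing in for DataFrame and ++ standing in for union (the customer names here are just illustrative data):

```scala
// Stand-in types: List[String] plays the role of DataFrame, ++ the role of union.
val frames: List[Option[Map[String, List[String]]]] = List(
  Some(Map("customer1" -> List("prod1", "prod132"))),
  None, // collect skips this entry entirely
  Some(Map("customer2" -> List("prod32", "prod51")))
)

val result: List[(String, String)] = frames.collect {
  case Some(m) =>
    m.toList.flatMap { case (identifier, rows) =>
      rows.map(row => (identifier, row)) // tag each "row" with its identifier
    }
}.reduce(_ ++ _)
// result: List((customer1,prod1), (customer1,prod132), (customer2,prod32), (customer2,prod51))
```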

Something like this, perhaps?

import org.apache.spark.sql.functions.lit

list
  .flatten                 // List[Option[Map[...]]] -> List[Map[String, DataFrame]]
  .flatMap {
    _.map { case (id, df) =>
      df.withColumn("identifier", lit(id)) // id is a String, so wrap it in lit
    }
  }
  .reduce(_ union _)
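One caveat that applies to both answers: reduce throws an UnsupportedOperationException when the collection is empty, which happens here if the list is empty or every element is None. reduceOption is a drop-in safety net that returns an Option instead. A sketch of the difference on plain collections (again with List[String] standing in for DataFrame and ++ for union):

```scala
// reduceOption returns None instead of throwing when nothing survives the flatten.
val empty: List[Option[Map[String, List[String]]]] = List(None, None)

val maybeResult: Option[List[String]] = empty
  .flatten                // -> Nil, since every element was None
  .flatMap(_.values)
  .reduceOption(_ ++ _)
// maybeResult == None, rather than an UnsupportedOperationException
```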
