
Iterate on a Spark Dataframe based on column value

I have a dataframe in Spark that contains the following data:

{ID:"1",CNT:"2", Age:"21", Class:"3"}   
{ID:"2",CNT:"3", Age:"24", Class:"5"}

I want to iterate over the dataframe based on the CNT value and generate output like this:

{ID:"1",CNT:"1", Age:"21", Class:"3"}  
{ID:"1",CNT:"2", Age:"21", Class:"3"}  
{ID:"2",CNT:"1", Age:"24", Class:"5"}  
{ID:"2",CNT:"2", Age:"24", Class:"5"}  
{ID:"2",CNT:"3", Age:"24", Class:"5"}

Does anyone know how to achieve this?

You can convert the dataframe to an RDD, expand it with flatMap, and then convert it back to a dataframe:

import spark.implicits._  // for toDF / as[Person]

val df = Seq((1,2,21,3),(2,3,24,5)).toDF("ID", "CNT", "Age", "Class")

case class Person(ID: Int, CNT: Int, Age: Int, Class: Int)

// for each person, emit one row per value in 1..CNT
df.as[Person].rdd.flatMap(p => (1 to p.CNT).map(Person(p.ID, _, p.Age, p.Class))).toDF.show
+---+---+---+-----+
| ID|CNT|Age|Class|
+---+---+---+-----+
|  1|  1| 21|    3|
|  1|  2| 21|    3|
|  2|  1| 24|    5|
|  2|  2| 24|    5|
|  2|  3| 24|    5|
+---+---+---+-----+
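As a side note (this is an assumption beyond the answer above, and requires Spark 2.4 or later): the built-in `sequence` function can generate the `1..CNT` array directly, so the expansion stays entirely in the DataFrame API with no case class or RDD round-trip:

```scala
import org.apache.spark.sql.functions.{col, explode, lit, sequence}
import spark.implicits._

val df = Seq((1, 2, 21, 3), (2, 3, 24, 5)).toDF("ID", "CNT", "Age", "Class")

// sequence(1, CNT) builds the array [1, ..., CNT] for each row;
// explode then emits one output row per array element, overwriting CNT in place
df.withColumn("CNT", explode(sequence(lit(1), col("CNT"))))
  .show()
```

Because `withColumn` replaces `CNT` in place, the original column order is preserved without the drop/rename step used below.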

In case you prefer a solution that uses only the dataframe API, here we go:

import org.apache.spark.sql.functions.{col, explode, udf}

case class Person(ID: Int, CNT: Int, Age: Int, Class: Int)

// UDF that expands a count n into the array [1, 2, ..., n]
val iterations: (Int => Array[Int]) = (input: Int) => {
  (1 to input).toArray[Int]
}
val udf_iterations = udf(iterations)

val p1 = Person(1, 2, 21, 3)
val p2 = Person(2, 3, 24, 5)

val records = Seq(p1, p2)
val df = spark.createDataFrame(records)

df.withColumn("CNT-NEW", explode(udf_iterations(col("CNT"))))
  .drop(col("CNT"))
  .withColumnRenamed("CNT-NEW", "CNT")
  .select(df.columns.map(col): _*)
  .show(false)

+---+---+---+-----+
|ID |CNT|Age|Class|
+---+---+---+-----+
|1  |1  |21 |3    |
|1  |2  |21 |3    |
|2  |1  |24 |5    |
|2  |2  |24 |5    |
|2  |3  |24 |5    |
+---+---+---+-----+
