
Select column by name with multiple aggregate columns after pivot with Spark Scala

I'm trying to aggregate on multiple columns after a pivot in Scala Spark 2.0.1:

scala> val df = List((1, 2, 3, None), (1, 3, 4, Some(1))).toDF("a", "b", "c", "d")
df: org.apache.spark.sql.DataFrame = [a: int, b: int ... 2 more fields]

scala> df.show
+---+---+---+----+
|  a|  b|  c|   d|
+---+---+---+----+
|  1|  2|  3|null|
|  1|  3|  4|   1|
+---+---+---+----+

scala> val pivoted = df.groupBy("a").pivot("b").agg(max("c"), max("d"))
pivoted: org.apache.spark.sql.DataFrame = [a: int, 2_max(`c`): int ... 3 more fields]

scala> pivoted.show
+---+----------+----------+----------+----------+
|  a|2_max(`c`)|2_max(`d`)|3_max(`c`)|3_max(`d`)|
+---+----------+----------+----------+----------+
|  1|         3|      null|         4|         1|
+---+----------+----------+----------+----------+

So far I can't select or rename those columns:

scala> pivoted.select("3_max(`d`)")
org.apache.spark.sql.AnalysisException: syntax error in attribute name: 3_max(`d`);

scala> pivoted.select("`3_max(`d`)`")
org.apache.spark.sql.AnalysisException: syntax error in attribute name: `3_max(`d`)`;

scala> pivoted.select("`3_max(d)`")
org.apache.spark.sql.AnalysisException: cannot resolve '`3_max(d)`' given input columns: [2_max(`c`), 3_max(`d`), a, 2_max(`d`), 3_max(`c`)];

There has to be a simple trick here, any ideas? Thanks.

It looks like a bug; the backticks in the generated names are causing the problem. One workaround is to strip the backticks from the column names:

// Strip backticks from every column name, renaming one column at a time
val pivotedNewName = pivoted.columns.foldLeft(pivoted)((df, col) =>
                             df.withColumnRenamed(col, col.replace("`", "")))

Now you can select the columns by name as usual:

pivotedNewName.select("2_max(c)").show
+--------+
|2_max(c)|
+--------+
|       3|
+--------+
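If you would rather avoid the renaming step altogether, here is a minimal sketch (not part of the original answer, and assuming Spark's usual value_alias naming for pivoted aggregates): alias each aggregate expression up front so the generated column names never contain backticks. The bulk rename above can also be written as a single toDF call.

// Assumed alternative: aliasing the aggregates should produce names like 2_max_c, 3_max_d
val pivotedAliased = df.groupBy("a")
  .pivot("b")
  .agg(max("c").alias("max_c"), max("d").alias("max_d"))

pivotedAliased.select("3_max_d").show

// Equivalent one-pass version of the backtick-stripping workaround
val pivotedClean = pivoted.toDF(pivoted.columns.map(_.replace("`", "")): _*)
pivotedClean.select("2_max(c)").show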
