
Spark - Sum of row values

I have the following DataFrame:

January | February | March
-----------------------------
  10    |    10    |  10
  20    |    20    |  20
  50    |    50    |  50

I am trying to add a column to it that holds the sum of each row's values:

January | February | March  | TOTAL
----------------------------------
  10    |    10    |   10   |  30
  20    |    20    |   20   |  60
  50    |    50    |   50   |  150

As far as I can tell, the built-in aggregate functions all seem to operate on values within a single column. How can I use values across columns on a per-row basis (using Scala)?

I have gotten as far as:

val newDf: DataFrame = df.select(colsToSum.map(col):_*).foreach ...

You were very close with this:

val newDf: DataFrame = df.select(colsToSum.map(col):_*).foreach ...

Instead, try this:

val newDf = df.select(colsToSum.map(col).reduce((c1, c2) => c1 + c2) as "sum")

I think this is the best answer, because it is as fast as the answer with the hard-coded SQL query and as convenient as the one that uses a UDF. It's the best of both worlds -- and I didn't even add a full line of code!
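The `reduce` in that answer folds the Column expressions pairwise into a single `c1 + c2 + ...` expression. The same fold can be seen on plain Python numbers, applied to one row of the example data (an illustration of the pattern only; no Spark session needed):

```python
from functools import reduce

# One row of the example DataFrame
row = [10, 10, 10]

# Pairwise fold, mirroring reduce((c1, c2) => c1 + c2) on Spark Columns
total = reduce(lambda c1, c2: c1 + c2, row)
print(total)  # 30
```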

Alternatively, following Hugo's approach and example, you can create a UDF that receives any number of columns and sums them all:

from functools import reduce
from pyspark.sql.functions import udf

def superSum(*cols):
    return reduce(lambda a, b: a + b, cols)

add = udf(superSum)

df.withColumn('total', add(*[df[x] for x in df.columns])).show()


+-------+--------+-----+-----+
|January|February|March|total|
+-------+--------+-----+-----+
|     10|      10|   10|   30|
|     20|      20|   20|   60|
+-------+--------+-----+-----+
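Stripped of the `udf` wrapper, the variadic `superSum` can be checked on plain numbers mirroring the two rows above (an illustration only; no Spark session needed):

```python
from functools import reduce

def superSum(*cols):
    # Fold an arbitrary number of arguments into their sum
    return reduce(lambda a, b: a + b, cols)

# Applied per row of the example data
rows = [(10, 10, 10), (20, 20, 20)]
totals = [superSum(*r) for r in rows]
print(totals)  # [30, 60]
```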

This code uses Python, but it can easily be translated:

# First we create an RDD in order to create a DataFrame:
rdd = sc.parallelize([(10, 10, 10), (20, 20, 20)])
df = rdd.toDF(['January', 'February', 'March'])
df.show()

# Here we create a new column called 'TOTAL', which holds the result
# of adding the columns df.January, df.February and df.March

df.withColumn('TOTAL', df.January + df.February + df.March).show()

Output:

+-------+--------+-----+
|January|February|March|
+-------+--------+-----+
|     10|      10|   10|
|     20|      20|   20|
+-------+--------+-----+

+-------+--------+-----+-----+
|January|February|March|TOTAL|
+-------+--------+-----+-----+
|     10|      10|   10|   30|
|     20|      20|   20|   60|
+-------+--------+-----+-----+

You can also create any user-defined function you need; here is the link to the Spark documentation: UserDefinedFunction (udf)

A Scala example with dynamic column selection:

import org.apache.spark.sql.functions.col
import sqlContext.implicits._
val rdd = sc.parallelize(Seq((10, 10, 10), (20, 20, 20)))
val df = rdd.toDF("January", "February", "March")
df.show()

+-------+--------+-----+
|January|February|March|
+-------+--------+-----+
|     10|      10|   10|
|     20|      20|   20|
+-------+--------+-----+

val sumDF = df.withColumn("TOTAL", df.columns.map(col).reduce(_ + _))
sumDF.show()

+-------+--------+-----+-----+
|January|February|March|TOTAL|
+-------+--------+-----+-----+
|     10|      10|   10|   30|
|     20|      20|   20|   60|
+-------+--------+-----+-----+

You can use expr(). In Scala:

import org.apache.spark.sql.functions.expr
df.withColumn("TOTAL", expr("January+February+March"))
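If the column list isn't fixed, the expression string passed to expr() can be assembled from df.columns. A sketch of the string construction in plain Python (no Spark needed; the Scala equivalent would be `df.columns.mkString("+")`):

```python
# Build the expr() argument dynamically from a list of column names
columns = ['January', 'February', 'March']
expression = '+'.join(columns)
print(expression)  # January+February+March
```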
