
How to reduce and sum grids within a Scala Spark DF

Is it possible to reduce n x n grids within a Scala Spark DF to the total sum of each grid and create a new df? Existing df:

1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 1 0 0 0 0 1 1
0 1 0 0 0 0 1 0
0 0 0 0 1 0 0 0

If n = 4, can we take 4x4 grids out of this df and sum each of them?

1 1 0 0 | 0 0 0 0
0 0 0 0 | 0 0 1 0
0 1 0 0 | 0 0 0 0
0 0 0 0 | 0 0 0 0
------------------
0 0 0 0 | 0 0 0 0
0 1 0 0 | 0 0 1 1
0 1 0 0 | 0 0 1 0
0 0 0 0 | 1 0 0 0

and get this output?

3 1
2 4

For the row-wise reduction you have to group and aggregate, and for the column-wise reduction you have to sum across columns. Example code for 2x2:

import pyspark.sql.functions as F
from pyspark.sql.types import *
from pyspark.sql.window import Window
#Create test data frame
tst= sqlContext.createDataFrame([(1,1,2,11),(1,3,4,12),(1,5,6,13),(1,7,8,14),(2,9,10,15),(2,11,12,16),(2,13,14,17),(2,13,14,17)],schema=['col1','col2','col3','col4'])
w=Window.orderBy(F.monotonically_increasing_id())
tst1= tst.withColumn("grp",F.ceil(F.row_number().over(w)/2)) # 2 is for this example - change to 4
# Row-wise reduction: sum each column within every group of 2 rows
tst_sum_row = tst1.groupby('grp').agg(*[F.sum(coln).alias(coln) for coln in tst1.columns if 'grp' not in coln])
# Column-wise reduction: add each pair of adjacent columns.
# The sum used here is Python's built-in sum, not the pyspark aggregate function F.sum()
expr = [sum([F.col(tst.columns[i]), F.col(tst.columns[i+1])]).alias('coln'+str(i)) for i in [x*2 for x in range(len(tst.columns)//2)]]
tst_sum_coln = tst_sum_row.select(*expr)

tst.show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|   1|   1|   2|  11|
|   1|   3|   4|  12|
|   1|   5|   6|  13|
|   1|   7|   8|  14|
|   2|   9|  10|  15|
|   2|  11|  12|  16|
|   2|  13|  14|  17|
|   2|  13|  14|  17|
+----+----+----+----+

tst_sum_coln.show()
+-----+-----+
|coln0|coln2|
+-----+-----+
|    6|   29|
|   14|   41|
|   24|   53|
|   30|   62|
+-----+-----+

Check the code below.
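
It assumes a DataFrame df with columns a through h holding the example grid from the question. A minimal sketch to create it (the column names are only an assumption, used to make the snippet reproducible):

import spark.implicits._

// Example 8x8 grid from the question, one row per tuple
val df = Seq(
  (1,1,0,0,0,0,0,0),
  (0,0,0,0,0,0,1,0),
  (0,1,0,0,0,0,0,0),
  (0,0,0,0,0,0,0,0),
  (0,0,0,0,0,0,0,0),
  (0,1,0,0,0,0,1,1),
  (0,1,0,0,0,0,1,0),
  (0,0,0,0,1,0,0,0)
).toDF("a","b","c","d","e","f","g","h")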

scala> df.show(false)
+---+---+---+---+---+---+---+---+
|a  |b  |c  |d  |e  |f  |g  |h  |
+---+---+---+---+---+---+---+---+
|1  |1  |0  |0  |0  |0  |0  |0  |
|0  |0  |0  |0  |0  |0  |1  |0  |
|0  |1  |0  |0  |0  |0  |0  |0  |
|0  |0  |0  |0  |0  |0  |0  |0  |
|0  |0  |0  |0  |0  |0  |0  |0  |
|0  |1  |0  |0  |0  |0  |1  |1  |
|0  |1  |0  |0  |0  |0  |1  |0  |
|0  |0  |0  |0  |1  |0  |0  |0  |
+---+---+---+---+---+---+---+---+
scala> val n = 4

This divides the rows into 2 groups, each containing 4 rows of data.

scala> val rowExpr = ntile(n/2)
.over(
    Window
    .orderBy(lit(1))
)
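
Note that Window.orderBy(lit(1)) imposes no real ordering, so this relies on the DataFrame keeping its original row order. With 8 rows, ntile(2) should place the first 4 rows in group 1 and the last 4 in group 2; a quick sanity check:

scala> df.withColumn("row_id", rowExpr).select($"row_id").show(false)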

Collect all the values into an array of arrays.

scala> val aggExpr = df
.columns
.grouped(4)
.toList.map(c => collect_list(array(c.map(col):_*)).as(c.mkString))

Flatten the array, remove the 0 values & take the size of the array.

scala> val selectExpr = df
.columns
.grouped(4)
.toList
.map(c => size(array_remove(flatten(col(c.mkString)),0)).as(c.mkString))
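
Because the grid only contains 0s and 1s, counting the non-zero entries this way equals the sum. If the cells could hold other values, a hedged alternative (assuming Spark 3.0+, where the aggregate higher-order function is available) would be to sum the flattened array instead:

scala> val sumExpr = df
.columns
.grouped(4)
.toList
.map(c => aggregate(flatten(col(c.mkString)), lit(0), (acc, x) => acc + x).as(c.mkString))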

Applying rowExpr, aggExpr & selectExpr:

scala> df
.withColumn("row_id",rowExpr)
.groupBy($"row_id")
.agg(aggExpr.head,aggExpr.tail:_*)
.select(selectExpr:_*)
.show(false)

Final output:

+----+----+
|abcd|efgh|
+----+----+
|3   |1   |
|2   |4   |
+----+----+
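
For reference, here is the same pipeline written as one sketch parameterized by the block size n, assuming both the row count and the column count are divisible by n (a sketch under those assumptions, not a drop-in replacement):

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val n = 4
// Number of row blocks = total rows / n (8 / 4 = 2 here)
val numRowGroups = (df.count() / n).toInt

val rowExpr    = ntile(numRowGroups).over(Window.orderBy(lit(1)))
val colGroups  = df.columns.grouped(n).toList
val aggExpr    = colGroups.map(c => collect_list(array(c.map(col): _*)).as(c.mkString))
val selectExpr = colGroups.map(c => size(array_remove(flatten(col(c.mkString)), 0)).as(c.mkString))

df.withColumn("row_id", rowExpr)
  .groupBy(col("row_id"))
  .agg(aggExpr.head, aggExpr.tail: _*)
  .select(selectExpr: _*)
  .show(false)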
