
Calculate the average while ignoring 0 values in a column

Input:

item   loc    month   year     qty    
watch  delhi   1       2020     10    
watch  delhi   2       2020     0     
watch  delhi   3       2020     20    
watch  delhi   4       2020     30    
watch  delhi   5       2020     40    
watch  delhi   6       2020     50 

Output:

item   loc    month   year     qty    avg
watch  delhi   1       2020     10    0
watch  delhi   2       2020     0     10
watch  delhi   3       2020     20    10
watch  delhi   4       2020     30    20
watch  delhi   5       2020     40    25
watch  delhi   6       2020     50    35

We need to calculate the average of qty over the previous two months, but with one condition: rows where qty = 0 must not be considered when calculating the average.

For example: for month 3 the average would ordinarily be (10 + 0) / 2 = 5, but since we need to ignore qty = 0, the average for month 3 is 10 / 1 = 10.

Thanks in advance
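
For reproducibility, here is a minimal sketch of the input as a Spark DataFrame (spark is an assumed active SparkSession, and month is assumed to arrive as a string, since the answers below cast it to int):

import spark.implicits._

// Sample data from the question; month is deliberately a string,
// matching the cast to int that the answers below apply
val df = Seq(
  ("watch", "delhi", "1", 2020, 10),
  ("watch", "delhi", "2", 2020, 0),
  ("watch", "delhi", "3", 2020, 20),
  ("watch", "delhi", "4", 2020, 30),
  ("watch", "delhi", "5", 2020, 40),
  ("watch", "delhi", "6", 2020, 50)
).toDF("item", "loc", "month", "year", "qty")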

In SQL, you can use window functions with a window frame specifier:

select t.*,
       coalesce(avg(nullif(qty, 0)) over (partition by item, loc
                                          order by month
                                          rows between 2 preceding and 1 preceding
                                         ),
                0) as qty_avg
from t;

And the same in Spark:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val w = Window.partitionBy("item", "loc").orderBy("month").rangeBetween(-2, -1)  // frame = the previous two month values
df.withColumn("month", 'month.cast("int"))  // rangeBetween needs a numeric ordering column
  .withColumn("avg", coalesce(avg(when('qty =!= lit(0), 'qty)).over(w), lit(0))).show()  // 0 becomes null, which avg() skips; coalesce turns the empty first frame into 0

+-----+-----+-----+----+---+----+
| item|  loc|month|year|qty| avg|
+-----+-----+-----+----+---+----+
|watch|delhi|    1|2020| 10| 0.0|
|watch|delhi|    2|2020|  0|10.0|
|watch|delhi|    3|2020| 20|10.0|
|watch|delhi|    4|2020| 30|20.0|
|watch|delhi|    5|2020| 40|25.0|
|watch|delhi|    6|2020| 50|35.0|
+-----+-----+-----+----+---+----+
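
The SQL version also runs unchanged inside Spark. A sketch, assuming df is the sample DataFrame from the question registered as a temp view named t:

df.createOrReplaceTempView("t")

// Identical logic: average over the previous two rows per (item, loc), skipping qty = 0
spark.sql("""
  select t.*,
         coalesce(avg(nullif(qty, 0)) over (partition by item, loc
                                            order by cast(month as int)
                                            rows between 2 preceding and 1 preceding),
                  0) as qty_avg
  from t
""").show()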

It can also be done in Spark using the lag function and a window frame:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType

val w = Window.partitionBy("item", "loc").orderBy("month")

df.withColumn("month", col("month").cast(IntegerType))
  .withColumn("avg",
    when(lag("qty", 2, 0).over(w) =!= lit(0) && lag("qty", 1, 0).over(w) =!= lit(0),
         (lag("qty", 2, 0).over(w) + lag("qty", 1, 0).over(w)).divide(lit(2)))  // both previous months non-zero: plain average
      .when(lag("qty", 1, 0).over(w) =!= lit(0), lag("qty", 1, 0).over(w))      // only the last month non-zero
      .otherwise(lag("qty", 2, 0).over(w)))                                     // else fall back to the month before (0 when missing)
  .show()

Output:

+-----+-----+-----+----+---+----+
| item|  loc|month|year|qty| avg|
+-----+-----+-----+----+---+----+
|watch|delhi|    1|2020| 10| 0.0|
|watch|delhi|    2|2020|  0|10.0|
|watch|delhi|    3|2020| 20|10.0|
|watch|delhi|    4|2020| 30|20.0|
|watch|delhi|    5|2020| 40|25.0|
|watch|delhi|    6|2020| 50|35.0|
+-----+-----+-----+----+---+----+
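
If the look-back ever needs to cover more than two months, the window-average approach generalizes more easily than chaining lag calls. A sketch (prevMonthsAvg is a hypothetical helper, not part of the original answers):

import org.apache.spark.sql.Column

// Average of the previous n months, skipping qty = 0; rowsBetween counts rows,
// which matches rangeBetween here because the months are contiguous
def prevMonthsAvg(n: Int): Column = {
  val w = Window.partitionBy("item", "loc").orderBy("month").rowsBetween(-n, -1)
  coalesce(avg(when('qty =!= lit(0), 'qty)).over(w), lit(0))
}

df.withColumn("month", 'month.cast("int"))
  .withColumn("avg", prevMonthsAvg(2))
  .show()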

I think that's a conditional window average:

select 
    t.*,
    coalesce(avg(nullif(qty, 0)) over(
        partition by item, loc
        order by month
        rows between 2 preceding and 1 preceding   -- limit the frame to the previous two months
    ), 0) qty_avg
from mytable t

nullif() yields null for 0 values, which avg() then ignores. I wrapped the entire window average in coalesce(), since you seem to want 0 when the frame contains only null values (or no rows at all).
