Spark SQL group by and sum: changing the column name?

In this DataFrame I am computing the total salary for each group. In Oracle I'd use this query:

select job_id,sum(salary) as "Total" from hr.employees group by job_id;

I tried the same in Spark SQL, but I am facing two issues:

empData.groupBy($"job_id").sum("salary").alias("Total").show()
  1. The alias "Total" is not displayed; the column still shows up as "sum(salary)".
  2. I could not use $ (the Scala column syntax, I think) inside sum. This version does not compile:

      empData.groupBy($"job_id").sum($"salary").alias("Total").show()

Any idea?

Use the aggregate method .agg() if you want to provide an alias. It accepts the Scala column syntax ($"..."):

import org.apache.spark.sql.functions.sum

empData.groupBy($"job_id").agg(sum($"salary") as "Total").show()

If you don't want to use .agg(), the alias can also be provided via .select(), by renaming the generated "sum(salary)" column:

empData.groupBy($"job_id").sum("salary").select($"job_id", $"sum(salary)".alias("Total")).show()
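
For completeness, here is a minimal self-contained sketch of both approaches. The local SparkSession and the small sample dataset (standing in for hr.employees) are assumptions for illustration, not part of the original question:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

object TotalSalaryExample extends App {
  // Local session for the sketch; in a real job this is usually provided.
  val spark = SparkSession.builder().master("local[*]").appName("totals").getOrCreate()
  import spark.implicits._

  // Hypothetical sample data standing in for hr.employees
  val empData = Seq(
    ("IT_PROG", 9000.0),
    ("IT_PROG", 6000.0),
    ("SA_REP",  7000.0)
  ).toDF("job_id", "salary")

  // Approach 1: alias inside .agg() — the aggregated column is named "Total"
  val byAgg = empData.groupBy($"job_id").agg(sum($"salary") as "Total")
  byAgg.show()

  // Approach 2: .sum() produces a column literally named "sum(salary)",
  // which .select() can then rename via .alias()
  val bySelect = empData.groupBy($"job_id").sum("salary")
    .select($"job_id", $"sum(salary)".alias("Total"))
  bySelect.show()

  spark.stop()
}
```

Both DataFrames end up with the schema (job_id, Total); the .agg() form is the more idiomatic one, since it avoids depending on the auto-generated "sum(salary)" name.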
