
Count(*) equivalent for Spark SQL in Scala

I want to count the number of rows after grouping a dataset by more than one column, for example:

val iWantToCount = someDataSet
      .groupBy($"x", $"y")
      .agg(count().as("Num_of_rows"))
      

but there is no overload of count that takes no arguments.

What other options do I have?

Edit:

Is count("*") the right way to go?
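
For what it's worth, count("*") does work in the Scala API: org.apache.spark.sql.functions.count has an overload that takes a column name, and "*" is resolved as counting all rows. A minimal sketch, assuming someDataSet and the $ interpolator are already in scope:

import org.apache.spark.sql.functions.count

// count("*") counts every row in each (x, y) group
val iWantToCount = someDataSet
      .groupBy($"x", $"y")
      .agg(count("*").as("Num_of_rows"))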

Try this script (both imports below are required: spark.implicits._ for toDF, and org.apache.spark.sql.functions for count and lit):

import spark.implicits._
import org.apache.spark.sql.functions.{count, lit}

// dummy data
val df = Seq(
  (1, "qwe", 1200),
  (1, "qwe", 1234),
  (1, "rte", 4673),
  (2, "ewr", 4245),
  (2, "ewr", 8973)
).toDF("col1", "col2", "col3")

df.groupBy("col1","col2").agg(count(lit(1)).alias("num_of_rows")).show

The data is grouped by the first two columns, and the row count for each group is derived in a new column. count(lit(1)) counts a literal 1 for every row, so it behaves like COUNT(*) in SQL.
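
With the dummy data above, the show call should print a result along these lines (row order may vary):

+----+----+-----------+
|col1|col2|num_of_rows|
+----+----+-----------+
|   1| qwe|          2|
|   1| rte|          1|
|   2| ewr|          2|
+----+----+-----------+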

import spark.implicits._

val df = Seq(
  (1, "qwe", 1200),
  (1, "qwe", 1234),
  (1, "rte", 4673),
  (2, "ewr", 4245),
  (2, "ewr", 8973)
).toDF("col1", "col2", "col3")

println(df.groupBy("col1", "col2").count().count())
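
Note that RelationalGroupedDataset.count() returns a DataFrame with one row per group plus a count column, so the outer Dataset.count() here returns the number of distinct (col1, col2) groups (3 for this data), not the per-group row counts. To see the per-group counts themselves, show the intermediate result instead:

// one row per (col1, col2) group, with a "count" column
df.groupBy("col1", "col2").count().show()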
