Count(*) equivalent for Spark SQL in Scala
I want to count the number of rows after aggregating a dataset by more than one column, for example:
val iWantToCount = someDataSet
.groupBy($"x", $"y")
.agg(count().as("Num_of_rows"))
but there is no overload of count which takes no arguments.
What other options do I have?
Edit: is count("*") the right way to go?
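For reference, this is what I have in mind with count("*"), assuming the same someDataSet, x, and y as above (count comes from org.apache.spark.sql.functions):

import org.apache.spark.sql.functions.count

// would this give the per-group row count?
val iWantToCount = someDataSet
  .groupBy($"x", $"y")
  .agg(count("*").as("Num_of_rows"))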
Try this script (the imports below are required: spark.implicits._ for toDF, and org.apache.spark.sql.functions for count and lit):
import spark.implicits._
import org.apache.spark.sql.functions.{count, lit}
// dummy data
val df = Seq(
  (1, "qwe", 1200),
  (1, "qwe", 1234),
  (1, "rte", 4673),
  (2, "ewr", 4245),
  (2, "ewr", 8973)
).toDF("col1", "col2", "col3")

df.groupBy("col1", "col2").agg(count(lit(1)).alias("num_of_rows")).show
The data is grouped by the first two columns, and the row count is derived in a new column.
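For what it's worth, the count("*") form from the question's edit behaves the same here; a minimal sketch, assuming the same df and imports as above (Spark resolves "*" to all rows, so nulls in individual columns are not dropped):

// same result as count(lit(1)): one row count per (col1, col2) group
df.groupBy("col1", "col2").agg(count("*").alias("num_of_rows")).show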
import spark.implicits._

val df = Seq(
  (1, "qwe", 1200),
  (1, "qwe", 1234),
  (1, "rte", 4673),
  (2, "ewr", 4245),
  (2, "ewr", 8973)
).toDF("col1", "col2", "col3")

// groupBy(...).count() adds a "count" column with the per-group row count;
// the outer count() then returns the number of distinct groups
println(df.groupBy("col1", "col2").count().count())
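If you want the per-group count under a custom name rather than the default "count" column, a small sketch, assuming the df above (Num_of_rows is the name from the question):

df.groupBy("col1", "col2")
  .count()                                    // adds a "count" column
  .withColumnRenamed("count", "Num_of_rows")  // rename to the desired label
  .show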