Get top values from a spark dataframe column in Scala
val df = sc.parallelize(Seq((201601, "a"),
(201602, "b"),
(201603, "c"),
(201604, "c"),
(201607, "c"),
(201604, "c"),
(201608, "c"),
(201609, "c"),
(201605, "b"))).toDF("col1", "col2")
I want to get the top 3 values of col1. Is there a better way to do this?
Spark: 1.6.2, Scala: 2.10
You can do it as follows:
df.select($"col1").orderBy($"col1".desc).limit(3).show()
You will get:
+------+
| col1|
+------+
|201609|
|201608|
|201607|
+------+
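Note that col1 contains duplicate months (201604 appears twice), so on other data the query above could return repeated values; add .distinct before orderBy if you want distinct months. A minimal sketch of the same logic on a plain Seq holding the question's numbers:

```scala
// Same months as in the DataFrame above, as a plain collection
val months = Seq(201601, 201602, 201603, 201604, 201607, 201604, 201608, 201609, 201605)
// Deduplicate, sort descending, keep the top 3
val top3 = months.distinct.sorted.reverse.take(3)
// top3: List(201609, 201608, 201607)
```

On the DataFrame the equivalent would be df.select($"col1").distinct.orderBy($"col1".desc).limit(3).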
You can get the same result in another way using the top function.
Example:
val data = sc.parallelize(Seq(("maths", 52), ("english", 75), ("science", 82), ("computer", 65), ("maths", 85))).top(2)
Results:
(science,82)
(maths,85)
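A caveat: top on an RDD of tuples uses the default tuple Ordering, which compares the first element (the string) before the number — that is why (science,82) and (maths,85) win here. To rank by the numeric value instead, pass an explicit Ordering, e.g. top(2)(Ordering.by(_._2)). The same idea sketched on a plain Seq:

```scala
val data = Seq(("maths", 52), ("english", 75), ("science", 82), ("computer", 65), ("maths", 85))
// Default tuple Ordering: compares the string first -- mirrors what top(2) returns above
val byName = data.sorted(Ordering[(String, Int)].reverse).take(2)
// byName: List((science,82), (maths,85))

// Rank by the numeric value instead (what top(2)(Ordering.by(_._2)) would do on the RDD)
val byValue = data.sortBy(-_._2).take(2)
// byValue: List((maths,85), (science,82))
```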
You can extract maxDate first, then filter based on maxDate:
val maxDate = df.agg(max("col1")).first().getAs[Int](0)
// maxDate: Int = 201609
// Subtract three months from a yyyyMM-encoded date, handling the year boundary
def minusThree(date: Int): Int = {
  val year = date / 100
  val month = date % 100
  if (month <= 3) (year - 1) * 100 + (month + 9)
  else year * 100 + (month - 3)
}
df.filter($"col1" > minusThree(maxDate)).show
+------+----+
| col1|col2|
+------+----+
|201607| c|
|201608| c|
|201609| c|
+------+----+
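If you are running on Java 8 or later, an alternative is to let java.time do the month arithmetic instead of the manual yyyyMM math; YearMonth handles the year boundary for you. A sketch (minusThreeJT is a hypothetical name, not from the answer above):

```scala
import java.time.YearMonth
import java.time.format.DateTimeFormatter

val fmt = DateTimeFormatter.ofPattern("yyyyMM")

// Parse the Int as a year-month, step back three months, re-encode as an Int
def minusThreeJT(date: Int): Int =
  YearMonth.parse(date.toString, fmt).minusMonths(3).format(fmt).toInt

minusThreeJT(201602)  // 201511 -- the year boundary is handled automatically
```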