
Get top values from a spark dataframe column in Scala

import sqlContext.implicits._  // pre-imported in spark-shell; needed for .toDF elsewhere

val df = sc.parallelize(Seq((201601, "a"),
  (201602, "b"),
  (201603, "c"),
  (201604, "c"),
  (201607, "c"),
  (201604, "c"),
  (201608, "c"),
  (201609, "c"),
  (201605, "b"))).toDF("col1", "col2")

I want to get the top 3 values of col1. Can anyone please let me know the best way to do this?

Spark: 1.6.2, Scala: 2.10

You can do it as shown below.

df.select($"col1").orderBy($"col1".desc).limit(3).show()

You will get

+------+
|  col1|
+------+
|201609|
|201608|
|201607|
+------+
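Note that col1 contains a duplicate (201604 appears twice), so the top 3 rows and the top 3 distinct values can differ in general. A minimal sketch for the distinct case, assuming the same df:

df.select($"col1").distinct().orderBy($"col1".desc).limit(3).show()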

You can get similar results in one more way using the RDD top function.

Example:

// top uses the implicit tuple Ordering: it compares by the first element, then the second
val data = sc.parallelize(Seq(("maths", 52), ("english", 75), ("science", 82),
  ("computer", 65), ("maths", 85))).top(2)

Results:
(science,82)
(maths,85)
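top also accepts an explicit Ordering, so you can rank by the score instead of the whole tuple. A minimal sketch with the same data, ordering by the second element of each pair:

val topByValue = sc.parallelize(Seq(("maths", 52), ("english", 75), ("science", 82),
  ("computer", 65), ("maths", 85))).top(2)(Ordering.by(_._2))
// topByValue: Array[(String, Int)] = Array((maths,85), (science,82))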

You can extract the maxDate first and then filter based on it:

import org.apache.spark.sql.functions.max

val maxDate = df.agg(max("col1")).first().getAs[Int](0)
// maxDate: Int = 201609

// Subtract three months from a date encoded as yyyyMM
def minusThree(date: Int): Int = {
  var year = date / 100
  var month = date % 100
  if (month <= 3) {   // rolls back past January into the previous year
    year -= 1
    month += 9
  } else {
    month -= 3
  }
  year * 100 + month
}

df.filter($"col1" > minusThree(maxDate)).show
+------+----+
|  col1|col2|
+------+----+
|201607|   c|
|201608|   c|
|201609|   c|
+------+----+
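If you want the top values as a plain Scala collection rather than a DataFrame, a minimal sketch, assuming the same df:

val top3 = df.select($"col1").orderBy($"col1".desc).limit(3)
  .collect().map(_.getInt(0))
// top3: Array[Int] = Array(201609, 201608, 201607)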
