List of columns for orderBy in Spark dataframe
I have a list of variables that contains column names. I am trying to use that to call orderBy on a dataframe.
val l = List("COL1", "COL2")
df.orderBy(l.mkString(","))
But mkString combines the column names into one string, leading to this error:
org.apache.spark.sql.AnalysisException: cannot resolve '`COL1,COL2`' given input columns: [COL1, COL2, COL3, COL4];
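The error follows directly from what `mkString` does: it joins the list into a single `String`, which Spark then treats as one column name. A minimal plain-Scala sketch (no Spark needed) of the value actually being passed:

```scala
// mkString joins the column names into ONE string.
val l = List("COL1", "COL2")
val joined = l.mkString(",")
// joined is the single value "COL1,COL2", so orderBy(joined) asks Spark
// to resolve a column literally named "COL1,COL2", which does not exist.
println(joined) // prints "COL1,COL2"
```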
How can I pass this list of strings as separate arguments, so that Spark looks for "COL1", "COL2" instead of "COL1,COL2"? Thanks.
You can call orderBy for a specific column:
import org.apache.spark.sql.functions._
df.orderBy(asc("COL1")) // df.orderBy(asc(l.headOption.getOrElse("COL1")))
// OR
df.orderBy(desc("COL1"))
If you want to sort by multiple columns, you can write something like this:
val l = List($"COL1", $"COL2".desc)
df.sort(l: _*)
Passing a single String argument tells Spark to sort the data frame using one column with the given name. There is an overload that accepts multiple column names, and you can use it this way:
val l = List("COL1", "COL2")
df.orderBy(l.head, l.tail: _*)
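The `l.head, l.tail: _*` pattern exists because this overload's signature is `orderBy(sortCol: String, sortCols: String*)`: the first column is a required argument and the rest are varargs. A plain-Scala sketch with a hypothetical helper (`orderKeys` is illustrative, not Spark's implementation) showing how the list gets splatted:

```scala
// Toy variadic function mirroring the shape orderBy(sortCol: String, sortCols: String*).
// orderKeys is a hypothetical helper for illustration only.
def orderKeys(first: String, rest: String*): Seq[String] = first +: rest

val l = List("COL1", "COL2", "COL3")
// l.head supplies the required first argument; "l.tail: _*" expands the
// remaining elements into the varargs parameter.
val keys = orderKeys(l.head, l.tail: _*)
// keys == Seq("COL1", "COL2", "COL3")
```

Note that this fails at runtime if `l` is empty, since `l.head` throws on an empty list.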
If you care about the sort direction (ascending vs. descending per column), use the Column version of orderBy instead:
val l = List($"COL1", $"COL2".desc)
df.orderBy(l: _*)
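To see what the Column version buys you, here is a plain-Scala analogy (not Spark): ordering rows by the first key ascending and the second key descending, which is what `List($"COL1", $"COL2".desc)` expresses. The `Row` case class and data are made up for illustration:

```scala
// Local analogy of multi-column ordering with mixed directions.
case class Row(col1: String, col2: Int)

val rows = List(Row("a", 1), Row("a", 2), Row("b", 1))
// col1 ascending, col2 descending (negating an Int key reverses its order).
val sorted = rows.sortBy(r => (r.col1, -r.col2))
// sorted == List(Row("a", 2), Row("a", 1), Row("b", 1))
```

The String overload of orderBy cannot express per-column direction; only `Column` values carry `.asc`/`.desc`, which is why the `List($"COL1", $"COL2".desc)` form is needed here.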