
Selecting several columns from a Spark DataFrame with a list of columns as a start

Assuming that I have a list of Spark columns and a Spark DataFrame df, what is the appropriate snippet of code to select a sub-DataFrame containing only the columns in the list?

Something similar to:

var needed_columns: List[Column] = List[Column](new Column("a"), new Column("b"))

df(needed_columns)

I wanted to get the column names and then select them using the following line of code.

Unfortunately, the column name seems to be write-only.

df.select(needed_columns.head.as(String),needed_columns.tail: _*)

Your needed_columns is of type List[Column], hence you can simply use needed_columns: _* as the arguments for select:

import org.apache.spark.sql.Column

// toDF assumes spark.implicits._ is in scope (automatic in spark-shell)
val df = Seq((1, "x", 10.0), (2, "y", 20.0)).toDF("a", "b", "c")

val needed_columns: List[Column] = List(new Column("a"), new Column("b"))

df.select(needed_columns: _*)
// +---+---+
// |  a|  b|
// +---+---+
// |  1|  x|
// |  2|  y|
// +---+---+
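
As an aside, the : _* is Scala's standard varargs ascription, not anything Spark-specific. Here is a minimal, Spark-free sketch (join is a hypothetical helper, purely for illustration):

// Expanding a collection into a varargs parameter with : _*
def join(parts: String*): String = parts.mkString(", ")

val xs = List("a", "b", "c")
join(xs: _*)  // the list is expanded into varargs: "a, b, c"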

Note that select takes two types of arguments:

def select(cols: Column*): DataFrame

def select(col: String, cols: String*): DataFrame

If you have a list of column names of type String, you can use the latter select:

val needed_col_names: List[String] = List("a", "b")

df.select(needed_col_names.head, needed_col_names.tail: _*)
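
One caveat with this form: head and tail assume the list is non-empty, and List().head throws a NoSuchElementException. A hedged variant for a possibly-empty list (whether falling back to df is right depends on your use case):

needed_col_names match {
  case h :: t => df.select(h, t: _*)
  case Nil    => df  // nothing requested; adjust the fallback as appropriate
}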

Or, you can map the list of Strings to Columns to use the former select:

df.select(needed_col_names.map(col): _*)  // col comes from org.apache.spark.sql.functions
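
Note also that select throws an AnalysisException if any requested name is missing from df. A hedged sketch that keeps only the names actually present (assumes exact-case matching against df.columns):

val present = needed_col_names.filter(df.columns.contains)
df.select(present.map(col): _*)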

I understand that you want to select only the columns given in a separate list (A), rather than working from the DataFrame's own columns. Below is an example where I select the first name and last name using a separate list. Check this out:

scala> val df = Seq((101,"Jack", "wright" , 27, "01976", "US")).toDF("id","fname","lname","age","zip","country")
df: org.apache.spark.sql.DataFrame = [id: int, fname: string ... 4 more fields]

scala> df.columns
res20: Array[String] = Array(id, fname, lname, age, zip, country)

scala> val needed =Seq("fname","lname")
needed: Seq[String] = List(fname, lname)

scala> val needed_df = needed.map( x=> col(x) )
needed_df: Seq[org.apache.spark.sql.Column] = List(fname, lname)

scala> df.select(needed_df:_*).show(false)
+-----+------+
|fname|lname |
+-----+------+
|Jack |wright|
+-----+------+


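Conversely, if you wanted every column except those in the list, you can diff the list against df.columns; df.drop(needed: _*) should be an equivalent shortcut:

// Select the complement: every column NOT named in `needed`
val remaining = df.columns.filterNot(needed.contains)
df.select(remaining.map(col): _*).show(false)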
