Spark Scala data frame select
I am trying to convert PySpark code to Spark Scala and I am facing the error below.
PySpark code:
import pyspark.sql.functions as fn

valid_data = (bcd_df.filter(fn.lower(bcd_df.table_name) == tbl_nme)
              .select("valid_data").rdd
              .map(lambda x: x[0])
              .collect()[0])
From the bcd_df dataframe I am getting the table_name column, matching the value of table_name against the argument tbl_nme that I am passing, and then selecting the valid_data column data.
Here is the code in Spark Scala:
val valid_data = bcd_df.filter(col(table_name) === tbl_nme).select(col("valid_data")).rdd.map(x => x(0)).collect()(0)
The error is as below:
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`abcd`' given input columns:
I am not sure why it is treating abcd as a column.
Any help is appreciated.
Version: Scala 2.11.8, Spark 2.3
Enclose table_name in double quotes (") inside col. Without the quotes, Scala evaluates table_name as a variable and passes its value ("abcd") as the column name, which is why the error message mentions abcd:
val valid_data = bcd_df.filter(col("table_name") === tbl_nme).select(col("valid_data")).rdd.map(x => x(0)).collect()(0)
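As a side note, the original PySpark code also applied lower() to table_name before comparing, which the Scala translation drops. A minimal sketch of a closer, more idiomatic equivalent (assuming bcd_df and tbl_nme are defined as in the question, and avoiding the .rdd round-trip by using the Dataset API's first()):

```scala
import org.apache.spark.sql.functions.{col, lower}

// Match the PySpark logic: lowercase the column before comparing,
// then take the first valid_data value without converting to an RDD.
val valid_data = bcd_df
  .filter(lower(col("table_name")) === tbl_nme)
  .select("valid_data")
  .first()
  .getString(0)
```

first() throws if no row matches, just as collect()(0) would; use head(1) or take(1) if you want to handle an empty result explicitly.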