How to concat all columns in a Spark DataFrame, using Java?
This is how I do it for 2 specific columns:

dataSet.withColumn("colName", concat(dataSet.col("col1"), lit(","), dataSet.col("col2")));

but dataSet.columns() returns a String array, not a Column array. How should I create a List<Column>?

Thanks!
Simple Way - Instead of df.columns, use concat_ws(",", "*"). Check the code below.
df.withColumn("colName",expr("concat_ws(',',*)")).show(false)
+---+--------+---+-------------+
|id |name |age|colName |
+---+--------+---+-------------+
|1 |Srinivas|29 |1,Srinivas,29|
|2 |Ravi |30 |2,Ravi,30 |
+---+--------+---+-------------+
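
Since the question asks for Java, here is a minimal self-contained sketch of the same expr-based approach. The local session and the inline sample data are illustrative assumptions, not part of the original answer:

import static org.apache.spark.sql.functions.expr;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ConcatAllColumns {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("concat-all-columns")
                .master("local[*]") // local session, just for the demo
                .getOrCreate();

        // Inline sample data matching the output shown above (illustrative only)
        Dataset<Row> df = spark.sql(
            "SELECT * FROM VALUES (1, 'Srinivas', 29), (2, 'Ravi', 30) AS t(id, name, age)");

        // '*' inside the SQL expression expands to every column of df,
        // so no column has to be listed by hand
        df.withColumn("colName", expr("concat_ws(',', *)")).show(false);

        spark.stop();
    }
}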
Java has more verbose syntax. Try this -
df.withColumn("colName",
    concat_ws(",",
        toScalaSeq(Arrays.stream(df.columns())
            .map(functions::col)
            .collect(Collectors.toList()))));
Use the utility below to convert a Java List to a Scala Seq -
// Needs: java.util.List, scala.collection.JavaConversions, scala.collection.mutable.Buffer
<T> Buffer<T> toScalaSeq(List<T> list) {
    return JavaConversions.asScalaBuffer(list);
}
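
If you would rather skip the Scala collection conversion entirely, a possible alternative is to build a plain Column[]. Spark declares concat_ws with Scala's @varargs annotation, which (assuming a Spark version that exposes it, as recent ones do) generates a Column... overload callable directly from Java; this is a sketch under that assumption:

import java.util.Arrays;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.functions;

// Turn the String[] from df.columns() into a Column[]
Column[] cols = Arrays.stream(df.columns())
        .map(functions::col)
        .toArray(Column[]::new);

// The Column... overload accepts the array directly, no Scala Seq needed
Dataset<Row> result = df.withColumn("colName", functions.concat_ws(",", cols));
result.show(false);

This also sidesteps scala.collection.JavaConversions, which is deprecated and no longer present in Scala 2.13 builds of Spark.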
If someone is looking for a way to concat all the columns of a DataFrame in Scala, this is what worked for me:
val df_new = df.withColumn(new_column_name, concat_ws("-", df.columns.map(col): _*))