I am trying to change all the columns of a Spark DataFrame to double type, but I would like to know whether there is a better way than just looping over the columns and casting.
With this dataframe:
df = spark.createDataFrame(
    [
        (1, 2),
        (2, 3),
    ],
    ["foo", "bar"],
)
df.show()
+---+---+
|foo|bar|
+---+---+
| 1| 2|
| 2| 3|
+---+---+
The for loop is probably the easiest and most natural solution.
from pyspark.sql import functions as F

for col in df.columns:
    df = df.withColumn(
        col,
        F.col(col).cast("double"),
    )
df.show()
+---+---+
|foo|bar|
+---+---+
|1.0|2.0|
|2.0|3.0|
+---+---+
Of course, you can also use a Python comprehension:
df.select(
    *(
        F.col(col).cast("double").alias(col)
        for col in df.columns
    )
).show()
+---+---+
|foo|bar|
+---+---+
|1.0|2.0|
|2.0|3.0|
+---+---+
If you have a lot of columns, the second solution is a little better: each withColumn call adds a projection to the query plan, so calling it in a loop can produce a very large plan, whereas the select expresses all the casts as a single projection.
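If only some columns actually need casting, df.dtypes (a list of (column name, type string) pairs) lets you filter inside the same comprehension. Here is a minimal sketch, assuming the df from above; df_casted is just an illustrative name:

from pyspark.sql import functions as F

# df.dtypes yields (name, type) pairs such as ("foo", "bigint");
# cast only the columns that are not already double.
df_casted = df.select(
    *(
        F.col(name).cast("double").alias(name) if dtype != "double" else F.col(name)
        for name, dtype in df.dtypes
    )
)
df_casted.show()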
First of all, don't post PySpark solutions on (Scala) Spark questions. For beginners I find it really annoying, since not every implementation can be smoothly translated from PySpark to Spark.
Suppose df is the DataFrame:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.DoubleType
def func(column: Column) = column.cast(DoubleType)
val df2 = df.select(df.columns.map(c => func(col(c))): _*)
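Note that this is the same idea as the Python comprehension above: df.columns.map builds the casted columns, and the : _* syntax expands the resulting sequence as varargs to select, so all the casts are again done in a single projection.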