
How to truncate values for multiple rows and columns in a DataFrame in Spark Scala

I have a DataFrame df:

id  A         B       C       D
1   1.000234  2.3456  4.6789  7.6934
2   3.7643    4.2323  5.6342  8.567

I want to create another DataFrame df1 with the values truncated to 2 decimal places:

id  A     B     C     D
1   1.00  2.35  4.68  7.70
2   3.76  4.23  5.63  8.57

Can someone help me with the code? My DataFrame consists of 70 columns and 10,000 rows.

This can be done very easily using the format_number function:

import org.apache.spark.sql.functions.{col, format_number}

val df = Seq(
    (1, 1.000234, 2.3456, 4.6789, 7.6934),
    (2, 3.7643, 4.2323, 5.6342, 8.567)
  ).toDF("id", "A", "B", "C", "D")

df.show()

+---+--------+------+------+------+
| id|       A|     B|     C|     D|
+---+--------+------+------+------+
|  1|1.000234|2.3456|4.6789|7.6934|
|  2|  3.7643|4.2323|5.6342| 8.567|
+---+--------+------+------+------+

val df1 = df.select(col("id"), 
    format_number(col("A"), 2).as("A"), 
    format_number(col("B"), 2).as("B"), 
    format_number(col("C"), 2).as("C"), 
    format_number(col("D"), 2).as("D"))

df1.show()

+---+----+----+----+----+
| id|   A|   B|   C|   D|
+---+----+----+----+----+
|  1|1.00|2.35|4.68|7.69|
|  2|3.76|4.23|5.63|8.57|
+---+----+----+----+----+
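Since the question mentions 70 columns, hardcoding each column is impractical. A minimal sketch of applying format_number to every column except id dynamically (note that format_number returns a string column, not a numeric one, and adds thousands separators for large values):

import org.apache.spark.sql.functions.{col, format_number}

// Build the projection: keep "id" as-is, format every other
// column to 2 decimal places. format_number yields strings.
val formattedCols = df.columns.map {
  case "id" => col("id")
  case c    => format_number(col(c), 2).as(c)
}
val df1 = df.select(formattedCols: _*)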

This is one way to round the values in a DataFrame dynamically, rather than hardcoding each column:

import org.apache.spark.sql.functions.{col, round}

// Round every column except "id" to 2 decimal places by
// folding over the column names instead of listing them by hand.
val df1 = df.columns.filter(_ != "id").foldLeft(df) { (acc, colName) =>
  acc.withColumn(colName, round(col(colName), 2))
}
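Note that round performs half-up rounding rather than truncation. If literal truncation (simply dropping the extra digits) is wanted, a minimal sketch using floor, assuming all values are non-negative (floor rounds toward negative infinity, so negative values would be truncated away from zero):

import org.apache.spark.sql.functions.{col, floor}

// Truncate (not round) to 2 decimal places: scale up, drop the
// fraction with floor, scale back down, e.g. 7.6934 -> 7.69.
val dfTruncated = df.columns.filter(_ != "id").foldLeft(df) { (acc, c) =>
  acc.withColumn(c, floor(col(c) * 100) / 100)
}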

This worked for me.

You can cast to DecimalType(3,2) after importing org.apache.spark.sql.types._:

scala> val df = Seq(
     |     (1, 1.000234, 2.3456, 4.6789, 7.6934),
     |     (2, 3.7643, 4.2323, 5.6342, 8.567)
     |     ).toDF("id", "A", "B", "C", "D")
df: org.apache.spark.sql.DataFrame = [id: int, A: double ... 3 more fields]

scala> df.show()
+---+--------+------+------+------+
| id|       A|     B|     C|     D|
+---+--------+------+------+------+
|  1|1.000234|2.3456|4.6789|7.6934|
|  2|  3.7643|4.2323|5.6342| 8.567|
+---+--------+------+------+------+


scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._

scala> val df2=df.columns.filter(_ !="id").foldLeft(df){ (acc,x) => acc.withColumn(x,col(x).cast(DecimalType(3,2))) }
df2: org.apache.spark.sql.DataFrame = [id: int, A: decimal(3,2) ... 3 more fields]

scala> df2.show(false)
+---+----+----+----+----+
|id |A   |B   |C   |D   |
+---+----+----+----+----+
|1  |1.00|2.35|4.68|7.69|
|2  |3.76|4.23|5.63|8.57|
+---+----+----+----+----+
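One caveat: DecimalType(3,2) has precision 3 and scale 2, so it only fits values with a single digit before the decimal point; anything with absolute value 10 or more becomes null on the cast under Spark's default (non-ANSI) behavior, and the cast rounds half-up rather than truncates. A sketch with a roomier precision, assuming values need at most 8 digits before the decimal:

import org.apache.spark.sql.types._
import org.apache.spark.sql.functions.col

// DecimalType(10, 2): up to 8 integer digits plus 2 decimal places.
val df3 = df.columns.filter(_ != "id").foldLeft(df) { (acc, c) =>
  acc.withColumn(c, col(c).cast(DecimalType(10, 2)))
}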


