
Spark select column based on row values

I have an all-string Spark dataframe and I need to return the columns in which all rows meet a certain criterion.

scala> val df = spark.read.format("csv").option("delimiter",",").option("header", "true").option("inferSchema", "true").load("file:///home/animals.csv")

df.show()
+--------+---------+--------+
|Column 1| Column 2|Column 3|
+--------+---------+--------+
|(ani)mal|   donkey|    wolf|
|  mammal|(mam)-mal|  animal|
| chi-mps|   chimps|    goat|
+--------+---------+--------+

Here the criterion is: return the columns where every row value has length == 6, ignoring special characters. The result should be the dataframe below, since all rows in Column 1 and Column 2 have length == 6:

+--------+---------+
|Column 1| Column 2|
+--------+---------+
|(ani)mal|   donkey|
|  mammal|(mam)-mal|
| chi-mps|   chimps|
+--------+---------+

You can use regexp_replace to delete the special characters, if you know what they are, then compute the length and filter to the columns you want.

import org.apache.spark.sql.functions.{col, length, regexp_replace}

// For each column, add a companion "<name>_len" column holding the
// string length after stripping the special characters ( ) and -.
val cols = df.columns
val df2 = cols.foldLeft(df) {
    (df, c) => df.withColumn(c + "_len", length(regexp_replace(col(c), "[()-]", "")))
}

df2.show()

+--------+---------+-------+-----------+-----------+-----------+
| Column1|  Column2|Column3|Column1_len|Column2_len|Column3_len|
+--------+---------+-------+-----------+-----------+-----------+
|(ani)mal|   donkey|   wolf|          6|          6|          4|
|  mammal|(mam)-mal| animal|          6|          6|          6|
| chi-mps|   chimps|   goat|          6|          6|          4|
+--------+---------+-------+-----------+-----------+-----------+
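From there you still have to keep only the columns whose `_len` value is 6 on every row. The selection logic can be sketched in plain Scala, without a Spark session, using the same sample data as above (the helper `cleanLen` and the variable names are illustrative, not part of the Spark API):

```scala
// Sample data mirroring the question's dataframe.
val rows = Seq(
  Seq("(ani)mal", "donkey",    "wolf"),
  Seq("mammal",   "(mam)-mal", "animal"),
  Seq("chi-mps",  "chimps",    "goat")
)
val names = Seq("Column 1", "Column 2", "Column 3")

// Length of a value after removing the special characters,
// assuming they are exactly '(', ')' and '-'.
def cleanLen(s: String): Int = s.replaceAll("[()-]", "").length

// Keep the indices of columns where every row value has cleaned length 6.
val keep = names.indices.filter(i => rows.forall(r => cleanLen(r(i)) == 6))

val keptNames = keep.map(names)              // Seq("Column 1", "Column 2")
val keptRows  = rows.map(r => keep.map(r))   // drops the "Column 3" values
```

In the Spark version, the same check can be done by aggregating each `_len` column (e.g. verifying its minimum and maximum are both 6) and then calling `df.select` with the surviving original column names.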
