
How to update rows in a Spark dataframe based on a condition

I am trying to update some rows of a dataframe; below is my code.

dfs_ids1 = dfs_ids1.withColumn("arrival_dt", F.when(F.col("arrival_dt") == '1960-01-01', F.lit(None)))

Basically, I want to set arrival_dt to null in all the rows where it is 1960-01-01 and leave the rest of the rows unchanged.

You need to understand the filter and when functions.

If you only want to fetch the matching rows and discard the others, use filter:

from pyspark.sql.functions import *

dfs_ids1 = dfs_ids1.filter(col("arrival_dt") == '1960-01-01')
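
Equivalently, filter also accepts a SQL expression string, which is likely what the original snippet was aiming for:

dfs_ids1 = dfs_ids1.filter("arrival_dt = '1960-01-01'")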

If you want to keep the matching rows and replace the remaining ones with a custom value (or another column), add an otherwise clause:

dfs_ids1 = dfs_ids1.withColumn("arrival_dt", when(col("arrival_dt") == "1960-01-01", col("arrival_dt")).otherwise(lit(None)))

Or, since when without an otherwise defaults to null for all non-matching rows:

dfs_ids1 = dfs_ids1.withColumn("arrival_dt", when(col("arrival_dt") == "1960-01-01", col("arrival_dt")))

Sample example. Input df:

+------+-------+-----+
|  name|   city|state|
+------+-------+-----+
| manoj|gwalior|   mp|
| kumar|  delhi|delhi|
|dhakad|chennai|   tn|
+------+-------+-----+

from pyspark.sql.functions import *
# Overwrite name with city where city == "delhi"; set it to null everywhere else
opOneDf = df.withColumn("name", when(col("city") == "delhi", col("city")).otherwise(lit(None)))
opOneDf.show()

Sample output:

+-----+-------+-----+
| name|   city|state|
+-----+-------+-----+
| null|gwalior|   mp|
|delhi|  delhi|delhi|
| null|chennai|   tn|
+-----+-------+-----+
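
To do what the question actually asks (replace 1960-01-01 with null and leave every other row unchanged), swap the two branches so that when supplies the null and otherwise keeps the existing value. A minimal version using the question's dfs_ids1:

from pyspark.sql import functions as F

# Rows where arrival_dt is '1960-01-01' become null; all other rows keep their value
dfs_ids1 = dfs_ids1.withColumn(
    "arrival_dt",
    F.when(F.col("arrival_dt") == "1960-01-01", F.lit(None)).otherwise(F.col("arrival_dt")),
)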
