
Date format in PySpark DataFrame

My data:

[Row(ID=2887628, Date_Time='11/01/2019 05:00:00 PM'),

My code:

from pyspark.sql import functions as F
df = df.withColumn('Date_Time',F.to_date(F.unix_timestamp('Date_Time', 'MM/dd/yyyy HH:mm:ss a').cast('timestamp')))

But the resulting Date_Time value is wrong:

[Row(ID=2887628, Date_Time=None),

What's the problem here?

Use lower-case hh instead of upper-case HH. HH is the 24-hour-clock hour and does not combine with the AM/PM marker a, so a string like '05:00:00 PM' fails to parse and you get null. See the docs for correct datetime pattern usage.

df2 = df.withColumn(
    'Date_Time',
    F.to_date(F.unix_timestamp('Date_Time', 'MM/dd/yyyy hh:mm:ss a').cast('timestamp'))
)

Your code can also be simplified: there is no need for unix_timestamp, since to_date accepts a format string directly.

df2 = df.withColumn(
    'Date_Time',
    F.to_date('Date_Time', 'MM/dd/yyyy hh:mm:ss a')
)
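For reference, here is a minimal self-contained sketch (assuming a local SparkSession and the sample row from the question) that rebuilds the data and verifies the corrected 12-hour pattern parses it:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Build a one-row DataFrame matching the sample data from the question.
spark = SparkSession.builder.master('local[1]').appName('date-format-check').getOrCreate()
df = spark.createDataFrame(
    [(2887628, '11/01/2019 05:00:00 PM')],
    ['ID', 'Date_Time']
)

# Apply the corrected pattern: hh (12-hour clock) together with the a (AM/PM) marker.
df2 = df.withColumn('Date_Time', F.to_date('Date_Time', 'MM/dd/yyyy hh:mm:ss a'))
df2.show()
# Expected output: the Date_Time column should now show 2019-11-01 instead of null.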
