How to identify digit characters as a date in a string column of a Spark DataFrame
In Apache Spark, I have a DataFrame with a string column that holds dates, but the leading zero is missing from single-digit months and days:
import org.apache.spark.sql.functions.regexp_replace
val df = spark.createDataFrame(Seq(
(1, "9/11/2020"),
(2, "10/11/2020"),
(3, "1/1/2020"),
(4, "12/7/2020"))).toDF("Id", "x4")
val newDf = df
.withColumn("x4New", regexp_replace(df("x4"), "(?:(\\d{2}))/(?:(\\d{1}))/(?:(\\d{4}))", "$1/0$2/$3"))
val newDf1 = newDf
.withColumn("x4New1", regexp_replace(df("x4"), "(?:(\\d{1}))/(?:(\\d{1}))/(?:(\\d{4}))", "0$1/0$2/$3"))
.withColumn("x4New2", regexp_replace(df("x4"), "(?:(\\d{1}))/(?:(\\d{2}))/(?:(\\d{4}))", "0$1/$2/$3"))
newDf1.show
Current output:
+---+----------+----------+-----------+-----------+
| Id| x4| x4New| x4New1| x4New2|
+---+----------+----------+-----------+-----------+
| 1| 9/11/2020| 9/11/2020| 9/11/2020| 09/11/2020|
| 2|10/11/2020|10/11/2020| 10/11/2020|100/11/2020|
| 3| 1/1/2020| 1/1/2020| 01/01/2020| 1/1/2020|
| 4| 12/7/2020|12/07/2020|102/07/2020| 12/7/2020|
+---+----------+----------+-----------+-----------+
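The stray values such as 100/11/2020 come from the patterns being unanchored: with no \b boundaries, \d{1} is free to match the trailing digit of a two-digit field. A plain-Scala reproduction of the x4New2 case (my own diagnosis, no Spark required):

```scala
object UnanchoredRegexDemo {
  def main(args: Array[String]): Unit = {
    // The x4New2 pattern from above, minus the non-capturing wrappers.
    // Without \b anchors, the engine matches "0/11/2020" inside
    // "10/11/2020", leaving the leading "1" untouched.
    val p = raw"(\d{1})/(\d{2})/(\d{4})".r
    val padded = p.replaceAllIn("10/11/2020",
      m => s"0${m.group(1)}/${m.group(2)}/${m.group(3)}")
    println(padded) // prints "100/11/2020"
  }
}
```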
Desired output: add a leading zero to the day or month when it is a single digit. I don't want to use a UDF, for performance reasons.
+---+----------+----------+
| Id| x4| date |
+---+----------+----------+
| 1| 9/11/2020|09/11/2020|
| 2|10/11/2020|10/11/2020|
| 3| 1/1/2020|01/01/2020|
| 4| 12/7/2020|12/07/2020|
+---+----------+----------+
Use the built-in functions from_unixtime with unix_timestamp, (or) date_format with to_timestamp, (or) to_date.
Example (in Spark 2.4):
import org.apache.spark.sql.functions._
//sample data
val df = spark.createDataFrame(Seq((1, "9/11/2020"),(2, "10/11/2020"),(3, "1/1/2020"), (4, "12/7/2020"))).toDF("Id", "x4")
//using from_unixtime
df.withColumn("date",from_unixtime(unix_timestamp(col("x4"),"MM/dd/yyyy"),"MM/dd/yyyy")).show()
//using date_format
df.withColumn("date",date_format(to_timestamp(col("x4"),"MM/dd/yyyy"),"MM/dd/yyyy")).show()
df.withColumn("date",date_format(to_date(col("x4"),"MM/dd/yyyy"),"MM/dd/yyyy")).show()
//+---+----------+----------+
//| Id| x4| date|
//+---+----------+----------+
//| 1| 9/11/2020|09/11/2020|
//| 2|10/11/2020|10/11/2020|
//| 3| 1/1/2020|01/01/2020|
//| 4| 12/7/2020|12/07/2020|
//+---+----------+----------+
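For intuition, the same normalization can be reproduced outside Spark with java.time (a standalone sketch, not part of the answer): a lenient one-digit input pattern plus a zero-padded output pattern, which is effectively what the MM/dd/yyyy round-trip above does.

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

object PadWithJavaTime {
  // "M/d/yyyy" accepts one- or two-digit month/day on input;
  // "MM/dd/yyyy" always emits them zero-padded on output.
  private val in  = DateTimeFormatter.ofPattern("M/d/yyyy")
  private val out = DateTimeFormatter.ofPattern("MM/dd/yyyy")

  def normalize(s: String): String = LocalDate.parse(s, in).format(out)

  def main(args: Array[String]): Unit =
    Seq("9/11/2020", "10/11/2020", "1/1/2020", "12/7/2020")
      .foreach(s => println(s"$s -> ${normalize(s)}"))
}
```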
Found a workaround; let me know if there is a better solution using a single DataFrame and no UDF:
import org.apache.spark.sql.functions.{regexp_replace, to_date}
val df = spark.createDataFrame(Seq(
(1, "9/11/2020"),
(2, "10/11/2020"),
(3, "1/1/2020"),
(4, "12/7/2020"))).toDF("Id", "x4")
val newDf = df.withColumn("x4New", regexp_replace(df("x4"), "(?:(\\b\\d{2}))/(?:(\\d))/(?:(\\d{4})\\b)", "$1/0$2/$3"))
val newDf1 = newDf.withColumn("x4New1", regexp_replace(newDf("x4New"), "(?:(\\b\\d{1}))/(?:(\\d))/(?:(\\d{4})\\b)", "0$1/$2/$3"))
val newDf2 = newDf1.withColumn("x4New2", regexp_replace(newDf1("x4New1"), "(?:(\\b\\d{1}))/(?:(\\d{2}))/(?:(\\d{4})\\b)", "0$1/$2/$3"))
val newDf3 = newDf2.withColumn("date", to_date(regexp_replace(newDf2("x4New2"), "(?:(\\b\\d{2}))/(?:(\\d{1}))/(?:(\\d{4})\\b)", "$1/0$2/$3"),"MM/dd/yyyy"))
val formatedDataDf = newDf3
.drop("x4New")
.drop("x4New1")
.drop("x4New2")
formatedDataDf.printSchema
formatedDataDf.show
The schema and output look as follows:
root
|-- Id: integer (nullable = false)
|-- x4: string (nullable = true)
|-- date: date (nullable = true)
+---+----------+----------+
| Id| x4| date|
+---+----------+----------+
| 1| 9/11/2020|2020-09-11|
| 2|10/11/2020|2020-10-11|
| 3| 1/1/2020|2020-01-01|
| 4| 12/7/2020|2020-12-07|
+---+----------+----------+
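The four chained regexp_replace steps could likely be collapsed into one: a word-boundary pattern \b(\d)\b matches only lone digits, so it pads month and day in a single pass (my own simplification, not from the post; in Spark this would be a single regexp_replace(df("x4"), "\\b(\\d)\\b", "0$1") followed by to_date). A plain-Scala check of the regex:

```scala
object LoneDigitPad {
  // \b(\d)\b matches a digit with a word boundary on both sides, so
  // the "9" in "9/11/2020" matches but neither "1" in "11" does.
  private val loneDigit = raw"\b(\d)\b".r

  def pad(s: String): String =
    loneDigit.replaceAllIn(s, m => "0" + m.group(1))

  def main(args: Array[String]): Unit =
    Seq("9/11/2020", "10/11/2020", "1/1/2020", "12/7/2020")
      .foreach(s => println(s"$s -> ${pad(s)}"))
}
```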