How to convert a string column with milliseconds to a timestamp with milliseconds in Spark 2.1 using Scala?
I am using Spark 2.1 with Scala.
How to convert a string column with milliseconds to a timestamp with milliseconds?
I tried the following code from the question Better way to convert a string field into timestamp in Spark:
import org.apache.spark.sql.functions.unix_timestamp
val tdf = Seq((1L, "05/26/2016 01:01:01.601"), (2L, "#$@#@#")).toDF("id", "dts")
val tts = unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss.SSS").cast("timestamp")
tdf.withColumn("ts", tts).show(2, false)
But I get the result without milliseconds:
+---+-----------------------+---------------------+
|id |dts |ts |
+---+-----------------------+---------------------+
|1 |05/26/2016 01:01:01.601|2016-05-26 01:01:01.0|
|2 |#$@#@# |null |
+---+-----------------------+---------------------+
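The milliseconds are lost because `unix_timestamp` resolves to whole seconds (a `bigint`), so the subsequent `cast("timestamp")` has no fraction left to restore. A minimal plain-Scala sketch of where the truncation happens (using the same format string and sample value as above):

```scala
import java.text.SimpleDateFormat

// SimpleDateFormat parses the fractional part into epoch milliseconds...
val format = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss.SSS")
val millis = format.parse("05/26/2016 01:01:01.601").getTime

// ...but unix_timestamp keeps only whole seconds, dropping the .601
val seconds = millis / 1000
```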
A UDF with SimpleDateFormat works. The idea is taken from Ram Ghadiyaram's link to a UDF logic.
import java.text.SimpleDateFormat
import java.sql.Timestamp
import org.apache.spark.sql.functions.udf
import scala.util.{Try, Success, Failure}
val getTimestamp: (String => Option[Timestamp]) = s => s match {
  case "" => None
  case _ => {
    val format = new SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss.SSS")
    Try(new Timestamp(format.parse(s).getTime)) match {
      case Success(t) => Some(t)
      case Failure(_) => None
    }
  }
}
val getTimestampUDF = udf(getTimestamp)
val tdf = Seq((1L, "05/26/2016 01:01:01.601"), (2L, "#$@#@#")).toDF("id", "dts")
val tts = getTimestampUDF($"dts")
tdf.withColumn("ts", tts).show(2, false)
with output:
+---+-----------------------+-----------------------+
|id |dts |ts |
+---+-----------------------+-----------------------+
|1 |05/26/2016 01:01:01.601|2016-05-26 01:01:01.601|
|2 |#$@#@# |null |
+---+-----------------------+-----------------------+
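Since `getTimestamp` is an ordinary Scala function, it can be sanity-checked without a Spark session at all; a quick standalone check of both the happy path and the garbage input:

```scala
import java.sql.Timestamp
import java.text.SimpleDateFormat
import scala.util.{Try, Success, Failure}

// Same function as in the answer, exercised as plain Scala
val getTimestamp: (String => Option[Timestamp]) = s => s match {
  case "" => None
  case _ =>
    val format = new SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss.SSS")
    Try(new Timestamp(format.parse(s).getTime)) match {
      case Success(t) => Some(t)
      case Failure(_) => None
    }
}

val ok  = getTimestamp("05/26/2016 01:01:01.601") // Some(...), millis kept
val bad = getTimestamp("#$@#@#")                  // None (ParseException caught)
```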
There is an easier way than making a UDF: just parse the millisecond part and add it to the unix timestamp. The following code is PySpark, but it should be very close to the Scala equivalent:
timeFmt = "yyyy/MM/dd HH:mm:ss.SSS"
df = df.withColumn('ux_t', unix_timestamp(df.t, format=timeFmt) + substring(df.t, -3, 3).cast('float')/1000)
Result: '2017/03/05 14:02:41.865' is converted to 1488722561.865
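The same seconds-plus-fraction arithmetic can be checked in plain Scala; the Spark column expression would combine `unix_timestamp` and `substring` the same way. The format string and sample value are taken from the answer above (the absolute result depends on the JVM's timezone; the answer's 1488722561.865 corresponds to UTC):

```scala
import java.text.SimpleDateFormat

val timeFmt = "yyyy/MM/dd HH:mm:ss.SSS"
val t = "2017/03/05 14:02:41.865"

// Whole seconds, as unix_timestamp would return them
val format = new SimpleDateFormat(timeFmt)
val seconds = format.parse(t).getTime / 1000

// The last three characters are the millisecond fraction
val fraction = t.takeRight(3).toFloat / 1000
val uxT = seconds + fraction
```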
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataTypes;
dataFrame.withColumn(
    "time_stamp",
    dataFrame.col("milliseconds_in_string")
        .cast(DataTypes.LongType)
        .cast(DataTypes.TimestampType)
)
The code is in Java and it is easy to convert to Scala.
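One caveat worth hedging on: in Spark 2.x, casting a numeric value to `TimestampType` interprets it as *seconds* since the epoch, so if the string holds epoch *milliseconds*, the value likely needs dividing by 1000 (e.g. cast to double first) before the timestamp cast. For comparison, the plain-JVM conversion takes milliseconds directly (the sample value below is hypothetical):

```scala
import java.sql.Timestamp

// A string holding epoch *milliseconds* (hypothetical sample value)
val millisInString = "1464224461601"

// java.sql.Timestamp's constructor takes milliseconds as-is,
// unlike Spark's numeric -> timestamp cast, which assumes seconds
val ts = new Timestamp(millisInString.toLong)
```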