How to create a Spark dataframe with timestamp?
How can I create this Spark dataframe with a timestamp data type in one step using Python? Here is how I do it in two steps, using Spark 3.1.2:
from pyspark.sql.functions import *
from pyspark.sql.types import *

schema_sdf = StructType([
    StructField("ts", TimestampType(), True),
    StructField("myColumn", LongType(), True),
])

sdf = spark.createDataFrame(
    [(to_timestamp(lit("2022-06-29 12:01:19.000")), 0)],
    schema=schema_sdf,
)
PySpark does not automatically interpret timestamp values from strings. I mostly use the following syntax to create the df and then `cast` the column type to timestamp:
from pyspark.sql import functions as F
sdf = spark.createDataFrame([("2022-06-29 12:01:19.000", 0 )], ["ts", "myColumn"])
sdf = sdf.withColumn("ts", F.col("ts").cast("timestamp"))
sdf.printSchema()
# root
# |-- ts: timestamp (nullable = true)
# |-- myColumn: long (nullable = true)
The long type was automatically inferred, but for the timestamp we needed a `cast`.
On the other hand, even without casting, you are able to use functions which need a timestamp as input:
sdf = spark.createDataFrame([("2022-06-29 12:01:19.000", 0 )], ["ts", "myColumn"])
sdf.printSchema()
# root
# |-- ts: string (nullable = true)
# |-- myColumn: long (nullable = true)
sdf.selectExpr("extract(year from ts)").show()
# +---------------------+
# |extract(year FROM ts)|
# +---------------------+
# | 2022|
# +---------------------+