Error When Converting Pandas DataFrame with Dates to Spark Dataframe
See the following minimal example:
import datetime
# this works
datetime.datetime(1973, 1, 23, 0).timestamp()
# this raises OSError: [Errno 22] Invalid argument
datetime.datetime(1953, 1, 23, 0).timestamp()
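On Windows, the failure comes from calling `.timestamp()` on a *naive* datetime, which routes through the C runtime's local-time functions; those reject pre-1970 values on that platform. A minimal sketch of one way around it, assuming the values are meant as UTC: attach an explicit timezone, which turns `.timestamp()` into pure arithmetic with no OS call.

```python
from datetime import datetime, timezone

# Naive datetimes delegate .timestamp() to the C runtime's
# mktime()/localtime(), which on Windows reject pre-1970 values.
# An aware datetime computes (self - epoch) directly, so negative
# (pre-epoch) timestamps work on any platform.
ts = datetime(1953, 1, 23, 0, tzinfo=timezone.utc).timestamp()
print(ts)  # -534556800.0
```

If the naive values actually represent local time rather than UTC, the offset has to be supplied explicitly instead.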
When I convert a Pandas DataFrame containing datetime64[ns] dates to an Apache Spark DataFrame, I get a pile of warnings of the form Exception ignored in: 'pandas._libs.tslibs.tzconversion._tz_convert_tzlocal_utc'
(full stack trace below), and pre-epoch dates are changed to the epoch. Why does this happen, and how can I prevent it?
Windows 10, Python 3.7.6, pyspark 2.4.5, pandas 1.0.1
#imports
import pandas as pd
from datetime import datetime
from pyspark.sql import SparkSession
#set up spark
spark = SparkSession.builder.getOrCreate()
#create dataframe
df = pd.DataFrame({'Dates': [datetime(2019,3,29), datetime(1953,2,20)]})
#data types
df.dtypes
"""
Result:
Dates datetime64[ns]
dtype: object
"""
#try to convert to spark
sparkdf = spark.createDataFrame(df)
Exception ignored in: 'pandas._libs.tslibs.tzconversion._tz_convert_tzlocal_utc'
Traceback (most recent call last):
File "C:\Users\jbishop\AppData\Roaming\Python\Python37\site-packages\dateutil\tz\_common.py", line 144, in fromutc
return f(self, dt)
File "C:\Users\jbishop\AppData\Roaming\Python\Python37\site-packages\dateutil\tz\_common.py", line 258, in fromutc
dt_wall = self._fromutc(dt)
File "C:\Users\jbishop\AppData\Roaming\Python\Python37\site-packages\dateutil\tz\_common.py", line 222, in _fromutc
dtoff = dt.utcoffset()
File "C:\Users\jbishop\AppData\Roaming\Python\Python37\site-packages\dateutil\tz\tz.py", line 222, in utcoffset
if self._isdst(dt):
File "C:\Users\jbishop\AppData\Roaming\Python\Python37\site-packages\dateutil\tz\tz.py", line 291, in _isdst
dstval = self._naive_is_dst(dt)
File "C:\Users\jbishop\AppData\Roaming\Python\Python37\site-packages\dateutil\tz\tz.py", line 260, in _naive_is_dst
return time.localtime(timestamp + time.timezone).tm_isdst
OSError: [Errno 22] Invalid argument
sparkdf.show()
+-------------------+
| Dates|
+-------------------+
|2019-03-29 00:00:00|
|1970-01-01 00:00:00|
+-------------------+
sparkdf.printSchema()
root
|-- Dates: timestamp (nullable = true)
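One way to sidestep the local-time conversion entirely (a sketch, not a definitive fix) is to hand Spark strings rather than naive timestamps, then cast back inside Spark. The formatting step happens in pandas and does not call the C runtime's localtime(), so pre-1970 dates survive on Windows:

```python
import pandas as pd

pdf = pd.DataFrame({'Dates': [pd.Timestamp(2019, 3, 29),
                              pd.Timestamp(1953, 2, 20)]})
# Format as strings in pandas; strftime is pure string formatting,
# so pre-epoch dates are preserved on any platform.
pdf['Dates'] = pdf['Dates'].dt.strftime('%Y-%m-%d %H:%M:%S')
print(pdf['Dates'].tolist())
# ['2019-03-29 00:00:00', '1953-02-20 00:00:00']
```

The string column can then be converted back with `spark.createDataFrame(pdf).withColumn('Dates', F.to_timestamp('Dates'))` (where `F` is `pyspark.sql.functions`); whether the result round-trips cleanly still depends on `spark.sql.session.timeZone`.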
This is not an answer, but it may shed light on the situation. Even dates close to 1970-01-01 00:00 give me Errno 22. (I am hoping to handle some very early dates, since the data was recorded before NTP time was available.) The earliest date that works as expected is around 1970-01-01 17:00:
>>> import datetime
>>> a3 = datetime.datetime(1970,1,1,17,59)
>>> a1 = datetime.datetime(1970,1,1,17,0)
>>> a0 = datetime.datetime(1970,1,1,16,59)
>>> a3.timestamp()
89940.0
>>> a1.timestamp()
86400.0
>>> a0.timestamp()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 22] Invalid argument
>>>
My machine is in the MDT, or GMT-6, time zone (MST outside daylight saving).
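The 17:00 boundary is consistent with that offset: 1970-01-01 17:00 MST (UTC-7) is exactly timestamp 86400, and CPython's naive `.timestamp()` internally probes one day earlier via `localtime()`, which Windows rejects for negative arguments. With an explicit offset attached the OS is never consulted, so the boundary disappears. A sketch, where the fixed UTC-7 offset is my assumption matching the poster's zone:

```python
from datetime import datetime, timezone, timedelta

# Fixed-offset stand-in for MST (UTC-7); an aware datetime never
# touches the OS, so times below the 17:00 boundary work fine.
mst = timezone(timedelta(hours=-7))
ts = datetime(1970, 1, 1, 16, 59, tzinfo=mst).timestamp()
print(ts)  # 86340.0
```

Note a fixed offset ignores daylight-saving transitions; a full IANA zone (e.g. via `dateutil` or Python 3.9's `zoneinfo`) would handle those, at the cost of reintroducing OS/zone-database lookups.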
New to Python and a first-time answer author! Windows 10, Python 3.7 (64-bit). Anaconda is installed but not used here.