
Pandas: Using Unix epoch timestamp as Datetime index

My application involves dealing with data (contained in a CSV) which is of the following form:

Epoch (number of seconds since Jan 1, 1970), Value
1368431149,20.3
1368431150,21.4
..

Currently I read the CSV using NumPy's loadtxt method (I could just as easily use read_csv from Pandas). For my series, I currently convert the timestamp column as follows:

timestamp_date = [datetime.datetime.fromtimestamp(ts) for ts in timestamp_column]

I follow this by setting timestamp_date as the datetime index for my DataFrame. I searched in several places to see whether there is a quicker (built-in) way of using these Unix epoch timestamps, but could not find one. Many applications use this timestamp convention.

  1. Is there an inbuilt method for handling such timestamp formats?
  2. If not, what is the recommended way of handling these formats?

Convert them to datetime64[s]:

import numpy as np

np.array([1368431149, 1368431150]).astype('datetime64[s]')
# array(['2013-05-13 07:45:49', '2013-05-13 07:45:50'], dtype='datetime64[s]')

You can also use pandas to_datetime:

df['datetime'] = pd.to_datetime(df["timestamp"], unit='s')

This method requires Pandas 0.18 or later.
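Putting this together with reading the CSV, a minimal sketch (the column names `timestamp` and `value` and the inline sample data are assumptions for illustration):

```python
import io

import pandas as pd

# Hypothetical CSV data in the format described in the question.
csv_data = io.StringIO("timestamp,value\n1368431149,20.3\n1368431150,21.4\n")

df = pd.read_csv(csv_data)

# Interpret the integer column as seconds since the Unix epoch.
df["datetime"] = pd.to_datetime(df["timestamp"], unit="s")
df = df.set_index("datetime")

print(df.index[0])  # 2013-05-13 07:45:49
```

With the datetime index in place you can slice by date strings, e.g. `df.loc["2013-05-13"]`.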

You can also construct a Pandas DatetimeIndex directly:

pd.DatetimeIndex(df['timestamp'] * 10**9)

Multiplying by 10**9 converts the values from seconds to nanoseconds, which is the unit DatetimeIndex expects when given integers.

This is nice since it allows you to use methods such as .date() or .tz_localize() on the index.
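For example, a short sketch of localizing such an index (the sample values come from the question; the timezone names are assumptions for illustration):

```python
import pandas as pd

timestamps = pd.Series([1368431149, 1368431150])

# Seconds -> nanoseconds, then build the index.
idx = pd.DatetimeIndex(timestamps * 10**9)

# Unix epoch values are UTC, so label them as such before converting.
localized = idx.tz_localize("UTC").tz_convert("US/Eastern")
print(localized[0])  # 2013-05-13 03:45:49-04:00
```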

The technical post webpages of this site follow the CC BY-SA 4.0 protocol. If you need to reprint, please indicate the site URL or the original address. Any questions, please contact: yoyou2525@163.com.
