Timeseries Resampling
I have the following dataset, downloaded from Dropbox (23 kB CSV).
The sampling rate of the data varies from one second to the next, anywhere from 0 Hz to over 200 Hz; in the provided dataset the highest sampling rate is about 50 samples per second.
When multiple samples fall within the same second, they should be spread evenly across that second. For example, this:
time x
2012-12-06 21:12:40 128.75909883327378
2012-12-06 21:12:40 32.799224301545976
2012-12-06 21:12:40 98.932953779777989
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 65.71691352191452
2012-12-06 21:12:44 117.1350194748169
2012-12-06 21:12:45 13.095622561808861
2012-12-06 21:12:47 61.295242676059246
2012-12-06 21:12:48 94.774064119961352
2012-12-06 21:12:49 80.169378222553533
2012-12-06 21:12:49 80.291142695702533
2012-12-06 21:12:49 136.55650749231367
2012-12-06 21:12:49 127.29790925838365
should become:
time x
2012-12-06 21:12:40 000ms 128.75909883327378
2012-12-06 21:12:40 333ms 32.799224301545976
2012-12-06 21:12:40 666ms 98.932953779777989
2012-12-06 21:12:43 000ms 132.07033814856786
2012-12-06 21:12:43 333ms 132.07033814856786
2012-12-06 21:12:43 666ms 65.71691352191452
2012-12-06 21:12:44 000ms 117.1350194748169
2012-12-06 21:12:45 000ms 13.095622561808861
2012-12-06 21:12:47 000ms 61.295242676059246
2012-12-06 21:12:48 000ms 94.774064119961352
2012-12-06 21:12:49 000ms 80.169378222553533
2012-12-06 21:12:49 250ms 80.291142695702533
2012-12-06 21:12:49 500ms 136.55650749231367
2012-12-06 21:12:49 750ms 127.29790925838365
Is there an easy way to do this with the pandas time-series resampling functionality, or is there something built into numpy or scipy that would work?
I don't think there is a built-in pandas or numpy method/function to do this.
However, I would use a python generator:
def repeats(lst):
    i_0 = None
    n = -1  # will still work if lst starts with None
    for i in lst:
        if i == i_0:
            n += 1
        else:
            n = 0
        yield n
        i_0 = i
# list(repeats([1,1,1,2,2,3])) == [0,1,2,0,1,0]
You can then pour this generator into a numpy array:
import numpy as np
df['rep'] = np.array(list(repeats(df['time'])))
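As an aside (my addition, not part of the original answer): on newer pandas versions the same `rep` column can be obtained with the built-in `groupby().cumcount()`, which numbers the rows within each group of identical timestamps, so no hand-written generator is needed. A minimal sketch with made-up data:

```python
import pandas as pd

# toy data: three rows share one timestamp, two share another, one is alone
df = pd.DataFrame({'time': ['21:12:40', '21:12:40', '21:12:40',
                            '21:12:43', '21:12:43', '21:12:44'],
                   'x': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]})

# cumcount() numbers rows 0, 1, 2, ... within each 'time' group
df['rep'] = df.groupby('time').cumcount()
print(df['rep'].tolist())  # [0, 1, 2, 0, 1, 0]
```

This matches the output of `repeats()` whenever equal timestamps are adjacent, as they are in sorted time-series data.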
Count the number of repeats:
from collections import Counter
count = Counter(df['time'])
df['count'] = df['time'].apply(lambda x: count[x])
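Similarly (again my addition, hedged on a newer pandas version), the per-timestamp count can be computed with `groupby()['...'].transform('size')`, which broadcasts each group's size back to every row and avoids the `Counter` + `apply` round trip:

```python
import pandas as pd

# toy data: two rows share a timestamp, one stands alone
df = pd.DataFrame({'time': ['21:12:40', '21:12:40', '21:12:43'],
                   'x': [1.0, 2.0, 3.0]})

# transform('size') repeats the group size for every member row
df['count'] = df.groupby('time')['time'].transform('size')
print(df['count'].tolist())  # [2, 2, 1]
```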
And do the calculation (this is the most expensive part of the computation):
import datetime

df['time2'] = df.apply(lambda row: (row['time']
                                    + datetime.timedelta(0, 1)  # 1s
                                    * row['rep']
                                    / row['count']),
                       axis=1)
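If the row-wise `apply` is too slow, a vectorized alternative (my sketch, not from the answer; it assumes `pd.to_timedelta`, available in any reasonably recent pandas) computes all the fractional-second offsets in one shot:

```python
import pandas as pd

# toy frame with the 'rep' and 'count' helper columns already computed
df = pd.DataFrame({
    'time': pd.to_datetime(['2012-12-06 21:12:40'] * 3 + ['2012-12-06 21:12:44']),
    'rep': [0, 1, 2, 0],
    'count': [3, 3, 3, 1],
})

# rep/count is the fraction of a second to shift each row by;
# to_timedelta(..., unit='s') turns that float into a timedelta
df['time2'] = df['time'] + pd.to_timedelta(df['rep'] / df['count'], unit='s')
```

Here the second and third rows end up roughly 1/3 s and 2/3 s after the shared timestamp, while singleton rows are left untouched.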
Note: to remove the helper columns afterwards, use del df['rep'] and del df['count'].
A "built-in" way to accomplish this could use shift twice, but I think that would be somewhat messy.
I found this to be a great use case for pandas' groupby mechanism, so I wanted to provide a solution for it as well. I find it slightly more readable than Andy's solution, but it's actually not that much shorter.
# First, get your data into a dataframe after having copied
# it with the mouse into a multi-line string:
import pandas as pd
from io import StringIO  # on Python 2 this was: from StringIO import StringIO
s = """2012-12-06 21:12:40 128.75909883327378
2012-12-06 21:12:40 32.799224301545976
2012-12-06 21:12:40 98.932953779777989
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 132.07033814856786
2012-12-06 21:12:43 65.71691352191452
2012-12-06 21:12:44 117.1350194748169
2012-12-06 21:12:45 13.095622561808861
2012-12-06 21:12:47 61.295242676059246
2012-12-06 21:12:48 94.774064119961352
2012-12-06 21:12:49 80.169378222553533
2012-12-06 21:12:49 80.291142695702533
2012-12-06 21:12:49 136.55650749231367
2012-12-06 21:12:49 127.29790925838365"""
sio = StringIO(s)
df = pd.read_csv(sio, parse_dates=[[0, 1]], sep=r'\s+', header=None)
df = df.set_index('0_1')
df.index.name = 'time'
df.columns = ['x']
Up to this point, that was just data preparation, so if you want to compare the solutions by length, start counting from here! ;)
# Now, group by the identical time indices:
grouped = df.groupby(df.index)

# Create yourself a second object
from datetime import timedelta
second = timedelta(seconds=1)

# loop over the group elements, collecting the new index parts in a list
l = []
for _, group in grouped:
    size = len(group)
    if size == 1:
        # go to pydatetime for the later addition, so the list is all in one format
        l.append(group.index.to_pydatetime())
    else:
        offsets = [i * second / size for i in range(size)]
        l.append(group.index.to_pydatetime() + offsets)

# exchange the index for the new index
import numpy as np
df.index = pd.DatetimeIndex(np.concatenate(l))
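For completeness, here is a compact sketch (my addition, combining ideas from both answers; it assumes a newer pandas with `cumcount`, `transform`, and `to_timedelta`) that shifts a duplicate-timestamp index in a few vectorized lines:

```python
import pandas as pd

# toy series: three samples share one second, one sample stands alone
idx = pd.to_datetime(['2012-12-06 21:12:40'] * 3 + ['2012-12-06 21:12:44'])
df = pd.DataFrame({'x': [128.76, 32.80, 98.93, 117.14]}, index=idx)

rep = df.groupby(level=0).cumcount()                  # position within each second
size = df.groupby(level=0)['x'].transform('size')     # samples in that second

# spread the samples evenly: shift row i of a group of n by i/n seconds
df.index = df.index + pd.to_timedelta((rep / size).to_numpy(), unit='s')
```

With the sample data above, the three rows sharing a second come out roughly 1/3 s apart, matching the desired output in the question.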