Equivalent of R data.table rolling join in Python and PySpark

Does anyone know how to do an R data.table rolling join in PySpark?

Borrowing the example from Ben's excellent explanation of rolling joins here:
sales<-data.table(saleID=c("S1","S2","S3","S4","S5"),
saleDate=as.Date(c("2014-2-20","2014-5-1","2014-6-15","2014-7-1","2014-12-31")))
commercials<-data.table(commercialID=c("C1","C2","C3","C4"),
commercialDate=as.Date(c("2014-1-1","2014-4-1","2014-7-1","2014-9-15")))
setkey(sales,"saleDate")
setkey(commercials,"commercialDate")
sales[commercials, roll=TRUE]
The result is:
saleDate saleID commercialID
1: 2014-01-01 NA C1
2: 2014-04-01 S1 C2
3: 2014-07-01 S4 C3
4: 2014-09-15 S4 C4
Any help is much appreciated.
First of all, a rolling join is not the same as a join plus a fillna! That only works when the keys of the joined table (in data.table terms the i table, i.e. the left side of the join) have equivalents in the main table. A data.table rolling join does not require this.

As far as I know there is no direct equivalent, and I searched for quite a while. There is even an issue for it: https://github.com/pandas-dev/pandas/issues/7546 . However:

There is a solution in pandas. Let's assume your right data.table is table A and your left data.table is table B. The steps are:

0. 'date' is the key for the rolling join. It does not have to be a date.
1. Sort tables A and B each by the key.
2. Add a column tag to A that is all 0 and a column tag to B that is all 1.
3. Delete all columns except the key and tag from B (this can be omitted, but it is clearer this way) and call the table B'. Keep B as the original - we are going to need it later.
4. Concatenate A with B' into C and ignore the fact that the rows from B' have many NAs.
5. Sort C by the key.
6. Make a new cumsum column with C = C.assign(groupNr = np.cumsum(C.tag)).
7. Using filtering (query) on tag, get rid of all B' rows.
8. Add a running counter column groupNr to the original B (integers from 0 to N-1 or from 1 to N, depending on whether you want a forward or a backward rolling join).
9. Join B with C on groupNr to get D.

In code:

import numpy as np
import pandas as pd

#0. 'date' is the key for the rolling join. It does not have to be a date.
A = pd.DataFrame.from_dict(
{'date': pd.to_datetime(["2014-3-1", "2014-5-1", "2014-6-1", "2014-7-1", "2014-12-1"]),
'value': ["a1", "a2", "a3", "a4", "a5"]})
B = pd.DataFrame.from_dict(
{'date': pd.to_datetime(["2014-1-15", "2014-3-15", "2014-6-15", "2014-8-15", "2014-11-15", "2014-12-15"]),
'value': ["b1", "b2", "b3", "b4", "b5", "b6"]})
#1. Sort tables A and B each by the key.
A = A.sort_values('date')
B = B.sort_values('date')
#2. Add a column tag to A which are all 0 and a column tag to B that are all 1.
A['tag'] = 0
B['tag'] = 1
#3. Delete all columns except the key and tag from B (can be omitted, but it is clearer this way) and call the table B'. Keep B as the original - we are going to need it later.
B_ = B[['date','tag']] # You need two [], because you get a series otherwise.
#4. Concatenate A with B' to C and ignore the fact that the rows from B' has many NAs.
C = pd.concat([A, B_])
#5. Sort C by key.
C = C.sort_values('date')
#6. Make a new cumsum column with C = C.assign(groupNr = np.cumsum(C.tag))
C = C.assign(groupNr = np.cumsum(C.tag))
#7. Using filtering (query) on tag get rid of all B'-rows.
C = C[C.tag == 0]
#8. Add a running counter column groupNr to the original B (integers from 0 to N-1 or from 1 to N, depending on whether you want forward or backward rolling join).
#   Pick exactly one of the following two lines (as written, the second overwrites the first):
# B['groupNr'] = range(1, len(B) + 1)  # forward roll: B's values are carried forward to A's rows
B['groupNr'] = range(len(B))  # backward roll: B's values are carried backward to A's rows
#9. Join B with C on groupNr to D.
D = C.set_index('groupNr').join(B.set_index('groupNr'), lsuffix='_A', rsuffix='_B')
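As a sanity check, the same recipe can be applied to the question's sales and commercials tables (a sketch of my own, not part of the original answer; note that to reproduce data.table's roll=TRUE behaviour at ties, where the commercial on 2014-07-01 matches the sale on the same day, the sales rows are concatenated first and a stable sort is used):

```python
import numpy as np
import pandas as pd

# The question's tables, with both keys renamed to 'date'.
A = pd.DataFrame({  # main table: commercials (its rows survive the join)
    'date': pd.to_datetime(["2014-01-01", "2014-04-01", "2014-07-01", "2014-09-15"]),
    'commercialID': ["C1", "C2", "C3", "C4"], 'tag': 0})
B = pd.DataFrame({  # joined table: sales
    'date': pd.to_datetime(["2014-02-20", "2014-05-01", "2014-06-15",
                            "2014-07-01", "2014-12-31"]),
    'saleID': ["S1", "S2", "S3", "S4", "S5"], 'tag': 1})

# Concatenate, stable-sort by the key (B first, so a sale on the same day
# counts as "at or before" the commercial), cumsum the tags, drop the B rows.
C = pd.concat([B[['date', 'tag']], A]).sort_values('date', kind='mergesort')
C = C.assign(groupNr=np.cumsum(C.tag))
C = C[C.tag == 0]

# Forward roll: each commercial is matched to the last sale at or before it.
B = B.assign(groupNr=range(1, len(B) + 1))

# Join on groupNr; a commercial before the first sale gets NaN, as in R.
D = (C.set_index('groupNr')
       .join(B.set_index('groupNr'), lsuffix='_A', rsuffix='_B')
       .reset_index(drop=True))
print(D[['date_A', 'saleID', 'commercialID']])
```

The printed saleID column should come out as NaN, S1, S4, S4, matching the expected result from the question.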
I had a similar problem and solved it with pandas.merge_asof.

Here is a quick solution for the case described above:
sales = pd.DataFrame.from_dict(
{'saleDate': pd.to_datetime(["2014-02-20","2014-05-01","2014-06-15","2014-07-01","2014-12-31"]),
'saleID': ["S1","S2","S3","S4","S5"]})
commercials = pd.DataFrame.from_dict(
{'commercialDate': pd.to_datetime(["2014-01-01","2014-04-01","2014-07-01","2014-09-15"]),
'commercialID': ["C1","C2","C3","C4"]})
result = pd.merge_asof(commercials,
sales,
left_on='commercialDate',
right_on='saleDate')
# Ordering for easier comparison
result = result[['commercialDate','saleID','commercialID']]
The result is the same as expected:
commercialDate saleID commercialID
0 2014-01-01 NaN C1
1 2014-04-01 S1 C2
2 2014-07-01 S4 C3
3 2014-09-15 S4 C4
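merge_asof also covers data.table's other roll variants through its direction parameter (a real pandas parameter, along with allow_exact_matches for tie handling); a minimal sketch with made-up tables:

```python
import pandas as pd

# Two tiny, made-up tables; merge_asof requires both to be sorted by the key.
left = pd.DataFrame({'t': pd.to_datetime(["2014-01-01", "2014-04-01"]),
                     'x': ["L1", "L2"]})
right = pd.DataFrame({'t': pd.to_datetime(["2014-02-20", "2014-03-15"]),
                      'y': ["R1", "R2"]})

# Like roll=TRUE (LOCF): take the last right row at or before each left row.
backward = pd.merge_asof(left, right, on='t', direction='backward')
# Like roll=-Inf (NOCB): take the next right row at or after each left row.
forward = pd.merge_asof(left, right, on='t', direction='forward')
# Like roll="nearest": take the closest right row in either direction.
nearest = pd.merge_asof(left, right, on='t', direction='nearest')
```

Here backward matches L1 to nothing (NaN) and L2 to R2, forward matches L1 to R1 and L2 to nothing, and nearest matches L1 to R1 and L2 to R2.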
This might be a simpler solution (it assumes sales is indexed by saleDate and commercials by commercialDate):

sales.asfreq("D", method="ffill").join(commercials, how="outer").dropna(subset=["commercialID"])
I tested this on the first example at https://gormanalysis.com/r-data-table-rolling-joins/ and it does work. A similar approach can be used for other rolling joins.
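Spelled out on the question's data (a sketch of my own, assuming date-indexed frames as the one-liner requires):

```python
import pandas as pd

sales = pd.DataFrame(
    {'saleID': ["S1", "S2", "S3", "S4", "S5"]},
    index=pd.to_datetime(["2014-02-20", "2014-05-01", "2014-06-15",
                          "2014-07-01", "2014-12-31"]))
commercials = pd.DataFrame(
    {'commercialID': ["C1", "C2", "C3", "C4"]},
    index=pd.to_datetime(["2014-01-01", "2014-04-01",
                          "2014-07-01", "2014-09-15"]))

# Upsample sales to daily frequency, carry the last sale forward, then keep
# only the rows that line up with a commercial date.
result = (sales.asfreq("D", method="ffill")
               .join(commercials, how="outer")
               .dropna(subset=["commercialID"]))
print(result)
```

The first commercial (2014-01-01) predates every sale, so its saleID stays NaN, just as in the data.table output.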