R: Pattern-matching financial time-series data with 2 large data sets
My question may be a bit involved, so please bear with me.
I am working with the following situation: I have two time-series data sets of financial prices, one from each of two exchanges (New York and London).
The two data sets look like this:
London data set:
Date time.second Price
2015-01-05 32417 238.2
2015-01-05 32418 238.2
2015-01-05 32421 238.2
2015-01-05 32422 238.2
2015-01-05 32423 238.2
2015-01-05 32425 238.2
2015-01-05 32427 238.2
2015-01-05 32431 238.2
2015-01-05 32435 238.47
2015-01-05 32436 238.47
New York data set:
NY.Date Time Price
2015-01-05 32416 1189.75
2015-01-05 32417 1189.665
2015-01-05 32418 1189.895
2015-01-05 32419 1190.15
2015-01-05 32420 1190.075
2015-01-05 32421 1190.01
2015-01-05 32422 1190.175
2015-01-05 32423 1190.12
2015-01-05 32424 1190.14
2015-01-05 32425 1190.205
2015-01-05 32426 1190.2
2015-01-05 32427 1190.33
2015-01-05 32428 1190.29
2015-01-05 32429 1190.28
2015-01-05 32430 1190.05
2015-01-05 32432 1190.04
As you can see, there are 3 columns: date, time (in seconds), and price.
What I want to do is use the London data set as the reference and, for each of its rows, find the nearest but earlier entry in the New York data set.
What do I mean by nearest but earlier? For example, for the row "2015-01-01", "21610", "15.6871" in the London data set, I want to find the New York row that has the same date and the closest time that is earlier than or equal to the London time. It may help to look at my current program:
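As an aside on the lookup itself: within a single date, this "nearest but earlier or equal" match is exactly what base R's findInterval() computes on a sorted time vector. A minimal sketch, using a few of the NY times from the question (the variable names are illustrative, not from the original code):

```r
# findInterval(x, vec) returns, for each x, the index of the largest
# element of the sorted vector vec that is <= x (0 if none exists),
# which is precisely the "nearest but earlier or equal" row index.
ny_time  <- c(32416, 32417, 32418, 32419, 32420)          # sorted ascending
ny_price <- c(1189.75, 1189.665, 1189.895, 1190.15, 1190.075)

london_time <- c(32417, 32421)            # query times from the London set
idx <- findInterval(london_time, ny_time)

idx            # 2 5  -> rows for NY times 32417 and 32420
ny_price[idx]  # 1189.665 1190.075
```

Because findInterval is vectorised, one call handles all London rows for a date at once instead of one loop iteration per row.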
# Slow row-by-row reference implementation; this is the for-loop
# I am trying to avoid.
temp <- matrix(0, nrow = nrow(london_data), ncol = 2)  # holds NYTime, NYPrice
for (i in 1:nrow(london_data)) {       # for each row in the London data set
  print(i)
  tempRow <- london_data[i, ]
  dateMatch <- which(NY_data[, 1] == tempRow[1])  # rows with the same date
  dataNeeded <- NY_data[dateMatch, ]              # subset of NY data for that date
  # find the nearest but earlier (or equal) data in the NY_data set
  Found <- dataNeeded[which(dataNeeded[, 2] <= tempRow[2]), ]
  # Found may be more than one row, each row of length 3
  if (length(Found) > 3) {
    # Select the data; we only need "time" and "price" (2nd and 3rd
    # columns). The data we want is in the final row of Found.
    selected <- Found[dim(Found)[1], 2:3]
    if (length(selected) == 0)  # if nothing selected, just insert 0 and 0
      temp[i, ] <- c(0, 0)
    else
      temp[i, ] <- selected
  } else {
    # Found is a single row (a vector of length 3)
    selected <- Found[2:3]
    temp[i, ] <- selected       # just insert what we want
  }
  print(paste("time is", as.numeric(selected[1])))  # monitor the loop
}
res <- cbind(london_data, temp)
colnames(res) <- c("LondonDate", "LondonTime", "LondonPrice", "NYTime", "NYPrice")
The correct output for the data sets listed above is (partial only):
"LondonDate","LondonTime","LondonPrice","NYTime","NYPrice"
[1,] "2015-01-05" "32417" "238.2" "32417" "1189.665"
[2,] "2015-01-05" "32418" "238.2" "32418" "1189.895"
[3,] "2015-01-05" "32421" "238.2" "32421" "1190.01"
[4,] "2015-01-05" "32422" "238.2" "32422" "1190.175"
[5,] "2015-01-05" "32423" "238.2" "32423" "1190.12"
[6,] "2015-01-05" "32425" "238.2" "32425" "1190.205"
[7,] "2015-01-05" "32427" "238.2" "32427" "1190.33"
[8,] "2015-01-05" "32431" "238.2" "32430" "1190.05"
[9,] "2015-01-05" "32435" "238.47" "32432" "1190.04"
[10,] "2015-01-05" "32436" "238.47" "32432" "1190.04"
My problem is that the London data set has more than 5,000,000 rows. I have tried to avoid the for-loop, but so far I still need it; the program above does run to completion, but it takes about 24 hours.
How can I avoid the for-loop and speed the program up?
Your help would be greatly appreciated.
The solution is to use data.table.
Building on @Jan Gorecki's comment, here is the solution:
library(data.table)
df1 <- data.table(Date = rep("05/01/2015", 10),
                  time.second = c(32417, 32418, 32421, 32422, 32423,
                                  32425, 32427, 32431, 32435, 32436),
                  Price = c(238.2, 238.2, 238.2, 238.2, 238.2,
                            238.2, 238.2, 238.2, 238.47, 238.47))
df2 <- data.table(NY.Date = rep("05/01/2015", 16),
                  Time = c(32416, 32417, 32418, 32419, 32420, 32421, 32422, 32423,
                           32424, 32425, 32426, 32427, 32428, 32429, 32430, 32432),
                  Price = c(1189.75, 1189.665, 1189.895, 1190.15, 1190.075, 1190.01,
                            1190.175, 1190.12, 1190.14, 1190.205, 1190.2, 1190.33,
                            1190.29, 1190.28, 1190.05, 1190.04))
setnames(df2, c("Date", "time.second", "NYPrice"))  # match df1's join-column names
setkey(df1, Date, time.second)
setkey(df2, Date, time.second)
df2[, NYTime := time.second]   # copy the NY time so it survives the join
df3 <- df2[df1, roll = TRUE]   # rolling join: latest NY row with time <= London time
df3
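For reference, roll = TRUE performs a "last observation carried forward" join: for each row of the inner table (df1) it picks the outer table's row with the largest key value that is less than or equal to the query key. A minimal self-contained demo of that semantics (toy data, not the question's):

```r
library(data.table)

# Reference table d and query table q, both keyed on time.
d <- data.table(time = c(32416L, 32418L, 32420L), price = c(1, 2, 3))
q <- data.table(time = c(32417L, 32420L, 32421L))
setkey(d, time)
setkey(q, time)

# For each q$time, roll forward to the last d row with time <= q$time:
# 32417 -> 32416 (price 1), 32420 -> exact match (price 3), 32421 -> 32420 (price 3).
d[q, roll = TRUE]$price  # 1 3 3
```

Because the join works on the sorted key, this runs in roughly O(n log n) overall, which is why it finishes in seconds where the row-by-row loop takes hours.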