How to remove partial duplicates from a data frame?
The data I'm importing describes numeric measurements taken at various locations at more or less evenly spaced timestamps. Sometimes this "evenly spaced" is not really true and I have to discard some of the values; it's not important which one, as long as I have one value per timestamp per location.
What do I do with the data? I add it to a result data.frame, which has a timestamp column whose values are definitely evenly spaced according to the step:
timestamps <- ceiling(as.numeric((timestamps-epoch)*24*60/step))*step*60 + epoch
result[result$timestamp %in% timestamps, columnName] <- values
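A minimal, self-contained version of the snapping step (the epoch and step values here are assumed for illustration, and difftime(..., units = "days") is used to make the day arithmetic explicit):

```r
epoch <- as.POSIXct("2009-09-30 00:00:00", tz = "UTC")
step  <- 15  # grid width in minutes

timestamps <- as.POSIXct(c("2009-09-30 10:04:18",
                           "2009-09-30 10:07:47",
                           "2009-09-30 10:09:01"), tz = "UTC")

## Round each timestamp *up* to the next multiple of `step` minutes
## past `epoch`. All three raw timestamps land on 10:15:00, which is
## exactly the collision described below.
mins    <- as.numeric(difftime(timestamps, epoch, units = "days")) * 24 * 60
snapped <- ceiling(mins / step) * step * 60 + epoch
```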
This does NOT work when I have timestamps that fall in the same time step. Here is an example:
> data.frame(ts=timestamps, v=values)
ts v
1 2009-09-30 10:00:00 -2.081609
2 2009-09-30 10:04:18 -2.079778
3 2009-09-30 10:07:47 -2.113531
4 2009-09-30 10:09:01 -2.124716
5 2009-09-30 10:15:00 -2.102117
6 2009-09-30 10:27:56 -2.093542
7 2009-09-30 10:30:00 -2.092626
8 2009-09-30 10:45:00 -2.086339
9 2009-09-30 11:00:00 -2.080144
> data.frame(ts=ceiling(as.numeric((timestamps-epoch)*24*60/step))*step*60+epoch,
+ v=values)
ts v
1 2009-09-30 10:00:00 -2.081609
2 2009-09-30 10:15:00 -2.079778
3 2009-09-30 10:15:00 -2.113531
4 2009-09-30 10:15:00 -2.124716
5 2009-09-30 10:15:00 -2.102117
6 2009-09-30 10:30:00 -2.093542
7 2009-09-30 10:30:00 -2.092626
8 2009-09-30 10:45:00 -2.086339
9 2009-09-30 11:00:00 -2.080144
In Python I would (mis)use a dictionary to achieve what I need:
dict(zip(timestamps, values)).items()
returns a list of pairs where the first coordinate is unique.
In R I don't know how to do this in a compact and efficient way.
I would use subset combined with duplicated to filter non-unique timestamps in the second data frame:
R> df_ <- read.table(textConnection('
ts v
1 "2009-09-30 10:00:00" -2.081609
2 "2009-09-30 10:15:00" -2.079778
3 "2009-09-30 10:15:00" -2.113531
4 "2009-09-30 10:15:00" -2.124716
5 "2009-09-30 10:15:00" -2.102117
6 "2009-09-30 10:30:00" -2.093542
7 "2009-09-30 10:30:00" -2.092626
8 "2009-09-30 10:45:00" -2.086339
9 "2009-09-30 11:00:00" -2.080144
'), as.is=TRUE, header=TRUE)
R> subset(df_, !duplicated(ts))
ts v
1 2009-09-30 10:00:00 -2.082
2 2009-09-30 10:15:00 -2.080
6 2009-09-30 10:30:00 -2.094
8 2009-09-30 10:45:00 -2.086
9 2009-09-30 11:00:00 -2.080
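If you want the last observation per timestamp instead, which is what the Python dict trick in the question keeps (later keys overwrite earlier ones), duplicated() accepts fromLast = TRUE. A minimal sketch on made-up vectors:

```r
timestamps <- c("10:15:00", "10:15:00", "10:30:00")  # already snapped to the grid
values     <- c(-2.0798, -2.1135, -2.0935)

## fromLast = TRUE marks all but the *last* occurrence as duplicates,
## so subsetting keeps the final value seen for each timestamp.
keep <- !duplicated(timestamps, fromLast = TRUE)
data.frame(ts = timestamps[keep], v = values[keep])
```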
Update: To select a specific value per timestamp you can use aggregate:
aggregate(df_$v, by=list(df_$ts), function(x) x[1]) # first value
aggregate(df_$v, by=list(df_$ts), function(x) tail(x, n=1)) # last value
aggregate(df_$v, by=list(df_$ts), function(x) max(x)) # max value
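The same calls can also be written with aggregate's formula interface, which keeps the original column names in the result. A small self-contained sketch (data values made up to mirror the example above):

```r
df_ <- data.frame(ts = c("10:15:00", "10:15:00", "10:30:00"),
                  v  = c(-2.0798, -2.1135, -2.0935))

## Formula interface: group v by ts; the result has columns `ts` and `v`
## instead of `Group.1` and `x`.
aggregate(v ~ ts, data = df_, FUN = function(x) x[1])  # first value per ts
aggregate(v ~ ts, data = df_, FUN = max)               # max value per ts
```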
I think you are looking for data structures for time-indexed objects, not for a dictionary. For the former, look at the zoo and xts packages, which offer much better time-based subsetting:
R> library(xts)
R> X <- xts(data.frame(val=rnorm(10)),
+           order.by=Sys.time() + sort(runif(10,10,300)))
R> X
val
2009-11-20 07:06:17 -1.5564
2009-11-20 07:06:40 -0.2960
2009-11-20 07:07:50 -0.4123
2009-11-20 07:08:18 -1.5574
2009-11-20 07:08:45 -1.8846
2009-11-20 07:09:47 0.4550
2009-11-20 07:09:57 0.9598
2009-11-20 07:10:11 1.0018
2009-11-20 07:10:12 1.0747
2009-11-20 07:10:58 0.7062
R> X["2009-11-20 07:08::2009-11-20 07:09"]
val
2009-11-20 07:08:18 -1.5574
2009-11-20 07:08:45 -1.8846
2009-11-20 07:09:47 0.4550
2009-11-20 07:09:57 0.9598
R>
The X object is ordered by a time sequence -- make sure it is of type POSIXct, so you may need to parse your dates first. Then we can just index for '7:08 to 7:09 on the given day'.
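Coming back to the duplicate timestamps: the same duplicated() idiom works on an xts index as well, since index(X) is an ordinary time vector (xts also ships a make.index.unique() helper for related fix-ups). A sketch with made-up values:

```r
library(xts)

## Two observations share the 10:15:00 stamp after snapping.
idx <- as.POSIXct(c("2009-09-30 10:00:00", "2009-09-30 10:15:00",
                    "2009-09-30 10:15:00", "2009-09-30 10:30:00"), tz = "UTC")
X <- xts(c(-2.0816, -2.0798, -2.1135, -2.0935), order.by = idx)

## duplicated() on the index keeps the first value recorded per timestamp.
X[!duplicated(index(X))]
```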