Remove duplicates based on specific criteria
I have a dataset that looks something like this:
df <- structure(list(Claim.Num = c(500L, 500L, 600L, 600L, 700L, 700L,
100L, 200L, 300L), Amount = c(NA, 1000L, NA, 564L, 0L, 200L,
NA, 0L, NA), Company = structure(c(NA, 1L, NA, 4L, 2L, 3L, NA,
3L, NA), .Label = c("ATT", "Boeing", "Petco", "T Mobile"), class = "factor")), .Names =
c("Claim.Num", "Amount", "Company"), class = "data.frame", row.names = c(NA,
-9L))
I want to remove duplicate rows based on Claim.Num values, but only those duplicates that match the following criteria:
df$Company == 'NA' | df$Amount == 0
In other words, remove records 1, 3, and 5.
I've gotten this far:
df <- df[!duplicated(df$Claim.Num[which(df$Amount == 0 | df$Company == 'NA')]), ]
The code runs without errors, but doesn't actually remove duplicate rows based on the required criteria. I think that's because I'm telling it to remove any duplicate Claim.Num that matches those criteria, rather than to remove duplicate Claim.Num values while treating certain Amounts & Companies preferentially for removal. Please note that I can't simply filter the dataset on the specified values, as there are other records with 0 or NA values that require inclusion (e.g. records 8 & 9 shouldn't be excluded, because their Claim.Nums are not duplicated).
If you order your data frame first, then you can make sure duplicated keeps the ones you want:
df.tmp <- with(df, df[order(ifelse(is.na(Company) | Amount == 0, 1, 0)), ])
df.tmp[!duplicated(df.tmp$Claim.Num), ]
# Claim.Num Amount Company
# 2 500 1000 ATT
# 4 600 564 T Mobile
# 6 700 200 Petco
# 7 100 NA <NA>
# 8 200 0 Petco
# 9 300 NA <NA>
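The ordering step is what makes this work: rows matching the removal criteria sort after their preferred duplicates, so `duplicated()` (which keeps the first occurrence of each `Claim.Num`) retains the preferred row. A self-contained sketch of the same two steps, with the data recreated from the question:

```r
# Recreate the question's data
df <- data.frame(
  Claim.Num = c(500L, 500L, 600L, 600L, 700L, 700L, 100L, 200L, 300L),
  Amount    = c(NA, 1000L, NA, 564L, 0L, 200L, NA, 0L, NA),
  Company   = factor(c(NA, "ATT", NA, "T Mobile", "Boeing", "Petco", NA, "Petco", NA))
)

# TRUE for rows we'd prefer to drop; FALSE sorts before TRUE,
# so the "good" member of each duplicate pair comes first
bad <- is.na(df$Company) | (!is.na(df$Amount) & df$Amount == 0)
df.tmp <- df[order(bad), ]

# duplicated() keeps only the first (i.e. preferred) row per Claim.Num
res <- df.tmp[!duplicated(df.tmp$Claim.Num), ]
```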
Slightly different approach
r <- merge(df,
aggregate(df$Amount,by=list(Claim.Num=df$Claim.Num),length),
by="Claim.Num")
result <-r[!(r$x>1 & (is.na(r$Company) | (r$Amount==0))),-ncol(r)]
result
# Claim.Num Amount Company
# 1 100 NA <NA>
# 2 200 0 Petco
# 3 300 NA <NA>
# 5 500 1000 ATT
# 7 600 564 T Mobile
# 9 700 200 Petco
This adds a column x to indicate which rows have Claim.Num present more than once, then filters the result based on your criteria. The use of -ncol(r) just removes the column x at the end.
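If you'd rather avoid the merge, the same per-Claim.Num count can be computed in place with base R's `ave()`; this is an equivalent sketch, not part of the original answer:

```r
# Recreate the question's data
df <- data.frame(
  Claim.Num = c(500L, 500L, 600L, 600L, 700L, 700L, 100L, 200L, 300L),
  Amount    = c(NA, 1000L, NA, 564L, 0L, 200L, NA, 0L, NA),
  Company   = factor(c(NA, "ATT", NA, "T Mobile", "Boeing", "Petco", NA, "Petco", NA))
)

# How many times each row's Claim.Num occurs (plays the role of column x)
n <- ave(seq_len(nrow(df)), df$Claim.Num, FUN = length)

# Drop a row only when its Claim.Num is duplicated AND it matches the criteria;
# FALSE & NA is FALSE in R, so singleton rows with NA values are safely kept
result <- df[!(n > 1 & (is.na(df$Company) | df$Amount == 0)), ]
```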
Another way based on subset and logical indices:
subset(df, !(duplicated(Claim.Num) | duplicated(Claim.Num, fromLast = TRUE)) |
  (!is.na(Amount) & Amount))
#   Claim.Num Amount  Company
# 2       500   1000      ATT
# 4       600    564 T Mobile
# 6       700    200    Petco
# 7       100     NA     <NA>
# 8       200      0    Petco
# 9       300     NA     <NA>
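Unpacked into explicit steps (base R, same logic as the subset call above): keep a row if its Claim.Num occurs only once, or if its Amount is a non-NA, non-zero value:

```r
# Recreate the question's data
df <- data.frame(
  Claim.Num = c(500L, 500L, 600L, 600L, 700L, 700L, 100L, 200L, 300L),
  Amount    = c(NA, 1000L, NA, 564L, 0L, 200L, NA, 0L, NA),
  Company   = factor(c(NA, "ATT", NA, "T Mobile", "Boeing", "Petco", NA, "Petco", NA))
)

# TRUE wherever a Claim.Num appears more than once
# (duplicated() alone misses the first occurrence, hence the fromLast pass)
dup <- duplicated(df$Claim.Num) | duplicated(df$Claim.Num, fromLast = TRUE)

# Keep singletons, plus duplicated rows whose Amount is present and non-zero
keep <- !dup | (!is.na(df$Amount) & df$Amount != 0)
out <- df[keep, ]
```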