
Group dataframes by columns and match by n elements

So here is my issue. I have two dataframes; simplified versions of them are below.

df1
ID         String
1.1        a
1.1        a
1.1        b
1.1        c
...
1.2        a 
1.2        a
1.2        c
1.2        c
...
2.1        a
2.1        n
2.1        o
2.1        o
...
2.2        a
2.2        n
2.2        n
2.2        o
...
3.1        a
3.1        a
3.1        x
3.1        x
...
3.2        a
3.2        x
3.2        a
3.2        x
...
4.1        a
4.1        b
4.1        o
4.1        o
... 
4.2        a
4.2        b
4.2        b
4.2        o

Imagine each ID (ex: 1.1) has over 1000 rows. Another thing to note is that IDs sharing the same number (ex: 1.1 and 1.2) are very similar to one another, but not exact matches.

df2
string2
a
b
a
c

df2 is a test dataframe.

I want to see which df1 ID is the closest match to df2, but I have one very important condition: I want to match by n elements at a time, not the whole dataframe against the other.

My pseudo code for this:

df2-elements-to-match <- df2$string2[1:n] #only the first n elements

group df1 by ID

df1-elements-to-match <- df1$String[1:n of every ID] #only the first n elements of each ID

Output a column with a score of how many elements match.

Filter df1 to remove ID groups with a score < m. #m here could be any number.

Filtered df1 becomes new df1. 

n <- n+1 

df2-elements-to-match and df1-elements-to-match both slide down to the next n elements. Overlap is optional (ex: if the first window was 1:2, then 3:4, or even 2:3 and then 3:4).

Repeat the loop with the updated variables.

Stop the loop when only one ID remains.

The idea here is to get a predicted match without having to match the whole test dataframe.
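To make the scoring concrete, for a single window (say n = 2) the score of one ID group against df2 would be computed with something like this (using the column names from the dataframes above):

## match score of ID 1.1 against df2 for the first window of n = 2 elements
n <- 2
window <- 1:n
sum(df1$String[df1$ID == 1.1][window] == df2$string2[window])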

## minimal dfs
df1 <- data.frame(ID=c(rep(1.1, 5),   # three ID groups with 5, 6 and 3 rows
                       rep(1.2, 6),
                       rep(1.3, 3)),
                  str=unlist(strsplit("aabaaaabcababc", "")),
                  stringsAsFactors=FALSE)

df2 <- data.frame(str=c("a", "b", "a", "b"), stringsAsFactors=FALSE)


## functions

distance <- function(df, query.df, df.col, query.df.col) {
  ## compare position by position: a deviation counts as 1, equality as 0
  deviating <- df[, df.col] != query.df[, query.df.col]
  sum(deviating, na.rm=TRUE) # if a group has too few rows, there will be NAs; ignore them
}

distances <- function(dfs, query.df, dfs.col, query.df.col) {
  ## apply distance() to every sub-df in the list dfs
  sapply(dfs, function(df) distance(df, query.df, dfs.col, query.df.col))
}

orderedDistances <- function(dfs, query.df, dfs.col, query.df.col) {
  ## sort the distances so the closest ID comes first
  dists <- distances(dfs, query.df, dfs.col, query.df.col)
  sort(dists)
}

orderByDistance <- function(dfs, query.df, dfs.col, query.df.col, dfs.split.col) {
  dfs.split <- split(dfs, dfs[, dfs.split.col])  # one sub-df per ID
  ## keep only the first N rows of each sub-df, N = nrow(query.df)
  dfs.split.N <- lapply(dfs.split, function(df) df[1:nrow(query.df), ])
  orderedDistances(dfs.split.N, query.df, dfs.col, query.df.col)
}


orderByDistance(df1, df2, "str", "str", "ID")
# 1.3 1.1 1.2 
#   1   3   3 

# 1.3 is the closest to df2!

Your problem is essentially a distance problem: minimizing the distance means finding the most similar sequence.

The distance I show here assumes that, at equivalent positions between df2 and a sub-df of df1, a deviation counts as 1 and equality as 0. The sum gives the dissimilarity score between the compared data frames, i.e. sequences of strings.
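For example, for ID 1.1 in the minimal dfs, the first four strings deviate from df2 at three positions:

## position-wise deviations of ID 1.1 (rows 1:4) vs. df2
c("a","a","b","a") != c("a","b","a","b")       # FALSE  TRUE  TRUE  TRUE
sum(c("a","a","b","a") != c("a","b","a","b"))  # 3 -> the distance of 1.1 above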

orderByDistance takes dfs (df1) and a query df (df2), plus the columns which should be compared and the column by which dfs should be split (here "ID"). First it splits dfs, then it collects the first N rows of each sub-df (preparation for the comparison), and finally it applies orderedDistances to each sub-df with the ensured N rows (N = number of rows of the query df).
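If you then want the iterative, sliding-window filtering from your pseudo code, you can wrap distance in a loop along these lines. This is only a sketch: slidingMatch is a name I made up, the windows do not overlap, and the threshold m counts deviations per window (the mirror image of your match score), so n and m would need tuning on your real data:

## sketch: slide a non-overlapping window of n rows over the query df, keep
## only the ID groups with at most m deviations inside the window, and stop
## when a single ID remains (or the query is exhausted)
slidingMatch <- function(dfs, query.df, dfs.col, query.df.col, dfs.split.col,
                         n=2, m=1) {
  dfs.split <- split(dfs, dfs[, dfs.split.col])
  start <- 1
  while (length(dfs.split) > 1 && start + n - 1 <= nrow(query.df)) {
    idx <- start:(start + n - 1)
    query.window <- query.df[idx, , drop=FALSE]
    dists <- sapply(dfs.split, function(df)
      distance(df[idx, , drop=FALSE], query.window, dfs.col, query.df.col))
    keep <- dists <= m       # at most m deviations in this window
    if (!any(keep)) break    # nothing passes: stop instead of dropping all IDs
    dfs.split <- dfs.split[keep]
    start <- start + n       # slide to the next, non-overlapping window
  }
  names(dfs.split)
}

slidingMatch(df1, df2, "str", "str", "ID", n=2, m=1)
# [1] "1.3"

On the minimal dfs above, the first window (rows 1:2) keeps all three IDs and the second window (rows 3:4) leaves only 1.3, so the loop stops without ever comparing the full sequences.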
