
R extract most common word(s) / ngrams in a column by group

I wish to extract the main keywords from the column 'title', for each group (1st column).

(screenshot of the data)

Desired result in column 'desired title':

(screenshot of the desired result)

Reproducible data:

myData <- 
structure(list(group = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3), title = c("mentoring aug 8th 2018", 
"mentoring aug 9th 2017", "mentoring aug 9th 2018", "mentoring august 31", 
"mentoring blue care", "mentoring cara casual", "mentoring CDP", 
"mentoring cell douglas", "mentoring centurion", "mentoring CESO", 
"mentoring charlotte", "medication safety focus", "medication safety focus month", 
"medication safety for nurses 2017", "medication safety formulations errors", 
"medication safety foundations care", "medication safety general", 
"communication surgical safety", "communication tips", "communication tips for nurses", 
"communication under fire", "communication webinar", "communication welling", 
"communication wellness")), row.names = c(NA, -24L), class = c("tbl_df", 
"tbl", "data.frame"))

I've looked into record linkage solutions, but those are mainly for grouping the full titles. Any suggestions would be great.

I concatenated all titles by group, and tokenized them:

library(dplyr)
library(tidytext)   # for unnest_tokens() and stop_words

myData <-
  myData %>% 
  group_by(group) %>% 
  mutate(titles = paste0(title, collapse = " ")) %>%
  select(group, titles) %>% 
  distinct()

myTokens <- myData %>% 
  unnest_tokens(word, titles) %>%          # one row per word
  anti_join(stop_words, by = "word")       # drop common stop words
myTokens

Below is the resulting dataframe: (screenshot of the token table)
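Before bringing in textrank, a plain frequency count already hints at the dominant tokens per group. A minimal sketch of mine (not in the original post), assuming raw counts are a reasonable first look:

# sketch: top 3 tokens per group by raw frequency
myTokens %>%
  count(group, word, sort = TRUE) %>%
  group_by(group) %>%
  slice_max(n, n = 3)   # requires dplyr >= 1.0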

# finding top ngrams
library(textrank)

stats <- textrank_keywords(myTokens$word, ngram_max = 3, sep = " ")
stats <- subset(stats$keywords, ngram > 0 & freq >= 3)  # keep keywords seen at least 3 times
head(stats, 5)

I'm happy with the result: (screenshot of the textrank output)
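For the 'desired title' column itself, a cruder shortcut is to keep the single most frequent bigram per group. A minimal sketch, assuming the top bigram is an acceptable group label (desired_title is a name I made up); note that because the titles were concatenated, a bigram can occasionally span two adjacent titles:

# sketch: most frequent bigram per group as a candidate group label
myData %>%
  unnest_tokens(bigram, titles, token = "ngrams", n = 2) %>%
  count(group, bigram, sort = TRUE) %>%
  group_by(group) %>%
  slice_max(n, n = 1, with_ties = FALSE) %>%
  rename(desired_title = bigram)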

While applying the algorithm to my real data of about 100,000 rows, I wrote a function to tackle the problem group by group:

# FUNCTION: TOP NGRAMS ----
find_top_ngrams <- function(titles_concatenated) {
  myTest <-
    titles_concatenated %>%
    as_tibble() %>%
    unnest_tokens(word, value) %>%
    anti_join(stop_words, by = "word")
  
  stats <- textrank_keywords(myTest$word, ngram_max = 4, sep = " ")
  stats <- subset(stats$keywords, ngram > 1 & freq >= 5)  # multi-word ngrams only, seen at least 5 times
  top_ngrams <- head(stats, 5)
  
  as_tibble(top_ngrams)   # tibble(top_ngrams) would pack the data frame into a single column
}


for (i in 1:5){
  print(find_top_ngrams(myData$titles[i]))   # inside a for loop, results must be printed explicitly
}
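Rather than looping over indices, the same function can be mapped over every group and the pieces bound into one table. A minimal sketch, assuming purrr is available (the name all_top_ngrams is mine):

# sketch: run find_top_ngrams once per group and combine the results
library(purrr)

all_top_ngrams <- myData$titles %>%
  set_names(myData$group) %>%    # carry the group id along as element names
  map(find_top_ngrams) %>%
  bind_rows(.id = "group")       # one table, with a 'group' column restored
all_top_ngrams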
