I am using this dataset https://archive.ics.uci.edu/ml/datasets/Eco-hotel
I am trying to figure out how to count the frequency of certain words like "room" or "vacation" within each column. I have attempted following tutorials online, but unfortunately, I have had no luck.
Using the iris dataset as an example, you can count how many rows in each column contain a given search term:

library(tidyverse)

iris %>%
  summarize(across(everything(), ~ sum(str_detect(., "setosa"))))
Of course, you'd need to change the search term to what you need. Note that str_detect() only checks whether the term appears at least once per row; use str_count() if you want to count every occurrence.
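Applied to your case, the sketch below uses a made-up stand-in for the Eco-hotel reviews (the real dataset's column names will differ, so adapt them to your file) and counts every occurrence of "room" per column with str_count():

```r
library(tidyverse)

# Hypothetical stand-in for the Eco-hotel review data:
# one text column per review source (names are made up).
reviews <- data.frame(
  tripadvisor = c("Great room, clean room", "Nice vacation spot"),
  booking     = c("The room was small", "Perfect vacation")
)

# str_count() counts every occurrence, so a review mentioning
# "room" twice contributes 2; str_detect() would contribute 1.
word_freq <- reviews %>%
  summarize(across(everything(), ~ sum(str_count(., "room"))))

word_freq
#   tripadvisor booking
# 1           2       1
```

If you need whole-word matches only (so that e.g. "bathroom" is not counted), use the regex pattern "\\broom\\b" instead of the plain string.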
If you want dedicated columns for each of your search patterns, you could alternatively do something like:
df <- data.frame(x = sample(letters, 10, replace = TRUE),
                 y = sample(letters, 10, replace = TRUE))

df |>
  summarize(across(c(x, y), ~ sum(str_count(., "u")), .names = "{.col}_u"),
            across(c(x, y), ~ sum(str_count(., "g")), .names = "{.col}_g"))
Here I'm searching for the letters "u" and "g", respectively; the .names argument gives each pattern its own output column, e.g. x_u and x_g.
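If the list of search terms grows, writing one across() call per term gets repetitive. As a sketch (the data frame and patterns here are made up for illustration), the same columns can be built programmatically with purrr::map() and combined with bind_cols():

```r
library(tidyverse)

df <- data.frame(x = c("ug", "gag", "u"),
                 y = c("g", "uu", "a"))

patterns <- c("u", "g")

# Build one across() summary per pattern, then combine the
# resulting one-row data frames column-wise.
counts <- map(patterns, function(p) {
  df %>%
    summarize(across(c(x, y), ~ sum(str_count(., p)),
                     .names = paste0("{.col}_", p)))
}) %>%
  bind_cols()

counts
#   x_u x_g y_u y_g
# 1   2   3   2   1
```

This produces the same {column}_{pattern} layout as the two explicit across() calls above, but scales to any number of patterns.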