regular expression: extract part of url and create new column in r
I have records of URLs and I want to extract parts of them into new columns. In my example, I would like to take the number after "groups" as group_id and the number after "discussion_topics" as discussion_id.
df looks like:
user url
1 "https://test.com/groups/3276/discussion_topics/3939"
2 "https://test.com/groups/34/discussion_topics/11"
3 "https://test.com/groups/3276"
4 "https://test.com/groups/other"
I want a result like:
user group_id discussion_id
1 3276 3939
2 34 11
3 3276 NA
4 NA NA
How can I do this with regular expressions in R? Thanks.
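For reference, a minimal sketch of the example data (the data frame name dat matches the answers below):

```r
# Example data as shown in the question
dat <- data.frame(
  user = 1:4,
  url = c(
    "https://test.com/groups/3276/discussion_topics/3939",
    "https://test.com/groups/34/discussion_topics/11",
    "https://test.com/groups/3276",
    "https://test.com/groups/other"
  ),
  stringsAsFactors = FALSE
)
```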
# The alternation "|.*" consumes the whole string when the capture group does
# not match, so sub() returns "" there and as.numeric() turns it into NA.
dat$group_id <- as.numeric(sub(".*/groups/(\\d+).*|.*", "\\1", dat$url))
dat$discussion <- as.numeric(sub(".*/discussion_topics/(\\d+).*|.*", "\\1", dat$url))
dat
user url group_id discussion
1 1 https://test.com/groups/3276/discussion_topics/3939 3276 3939
2 2 https://test.com/groups/34/discussion_topics/11 34 11
3 3 https://test.com/groups/3276 3276 NA
4 4 https://test.com/groups/other NA NA
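A minimal illustration of the "|.*" trick on a toy vector: when the capture group does not match, the alternative .* consumes the whole string, the "\\1" replacement is empty, and as.numeric() then yields NA.

```r
# "x42" matches the capture group; "nope" falls through to ".*"
out <- sub("x(\\d+)|.*", "\\1", c("x42", "nope"))
out              # "42" ""
as.numeric(out)  # 42 NA (with a coercion warning for "")
```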
Another version with the stringi package and a lookbehind regex.
Update: Admittedly, the function of @Onyambu is faster. See the benchmark.
Update 2: Added the third version to the benchmark. No improvement concerning speed.
library(stringi)
extract_info = function(x) {
  x$group = stri_extract_all_regex(x$url, "(?<=groups/)\\d+")
  x$topic = stri_extract_all_regex(x$url, "(?<=discussion_topics/)\\d+")
  x
}
extract_info(dat)
# user url group topic
# 1 1 https://test.com/groups/3276/discussion_topics/3939 3276 3939
# 2 2 https://test.com/groups/34/discussion_topics/11 34 11
# 3 3 https://test.com/groups/3276 3276 NA
# 4 4 https://test.com/groups/other NA NA
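Note that stri_extract_all_regex returns list columns; if plain character vectors are preferred, stri_extract_first_regex (also in stringi) returns a character vector with NA where there is no match — a sketch with a hypothetical urls vector:

```r
library(stringi)

urls <- c("https://test.com/groups/3276/discussion_topics/3939",
          "https://test.com/groups/other")
# Returns one string per input, NA when the lookbehind pattern finds no digits
stri_extract_first_regex(urls, "(?<=groups/)\\d+")
# "3276" NA
```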
extract_info2 = function(dat) {
  dat$group_id = as.numeric(sub(".*/groups/(\\d+).*|.*", "\\1", dat$url))
  dat$discussion = as.numeric(sub(".*/discussion_topics/(\\d+).*|.*", "\\1", dat$url))
  dat
}
extract_info3 = function(df) {
  # Parameter renamed to df so the body uses the argument, not a global variable
  df$group_id <- as.numeric(regmatches(df$url, gregexpr(".*groups/*\\K.\\d+", df$url, perl = TRUE)))
  df$discussion <- as.numeric(regmatches(df$url, gregexpr(".*topics/*\\K.\\d+", df$url, perl = TRUE)))
  df
}
microbenchmark::microbenchmark(
  extract_info(dat),
  extract_info2(dat),
  extract_info3(dat)
)
# Unit: microseconds
# expr min lq mean median uq max neval
# extract_info(dat) 152.769 160.269 172.1629 170.5325 176.0590 300.011 100
# extract_info2(dat) 99.872 106.386 120.9876 117.2415 125.7285 226.981 100
# extract_info3(dat) 285.799 301.984 378.7235 308.8925 323.3000 6684.297 100
Here is another option:
df$group_id <- as.numeric(regmatches(df$url, gregexpr(".*groups/*\\K.\\d+", df$url, perl=TRUE)))
df$discussion <- as.numeric(regmatches(df$url, gregexpr(".*topics/*\\K.\\d+", df$url, perl=TRUE)))
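For reference, \K resets the reported match start, so only the text after it is returned — a minimal sketch:

```r
# perl = TRUE enables \K; everything matched before \K is dropped from the result
m <- regmatches("groups/3276", regexpr("groups/\\K\\d+", "groups/3276", perl = TRUE))
m  # "3276"
```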