
Counting words between markers in R

I have several text files which I import into a corpus. Each text has several parts that were supposedly written on different days and are marked with #. A week is marked with $. For each text, how can I count how many words there are in each day and in each week? The text T1 below has days that are marked at the end with #, and I need to count the words in each day. The weeks are delimited by $, and I also need to know the number of words in each week. I also have texts T2, T3, ... Tn. How do I do this in R with quanteda?

<T1>
 (25.02.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage.                                                        

# (26.02.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                       

# (28.02.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.              
# (02.03.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. .                                           

# (03.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                                    

#
($)
 (04.03.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.                                      

# (05.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.  
# (06.03.2009)  This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. 

# (07.03.2009)  This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. 

# (08.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                    

# (09.03.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.                          

# (10.03.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage.                             

#
($)

Those texts look very familiar!

If I assign what you have above to txt, then you can wrap it in a quanteda corpus and then use corpus_segment() to split it on the tags.

library("quanteda")
## Package version: 1.5.0

corp <- corpus(txt) %>%
  # first, split on the literal "($)" tag that closes each week
  corpus_segment(pattern = "($)", valuetype = "fixed", pattern_position = "after") %>%
  # then, split each week on the "(dd.mm.yyyy)" date tag that opens each day
  corpus_segment(pattern = "\\(\\d{2}\\.\\d{2}\\.\\d{4}\\)", valuetype = "regex", pattern_position = "before")

The first segmentation splits along the "weeks", but since there is no tag there, we just segment again to get the date. This produces:

sapply(head(texts(corp)), substring, 1, 100)
##                                                                                                text1.1.1 
## "This chapter thoroughly describes the idea of analyzing text \"as data\" with a social science focus. " 
##                                                                                                text1.1.2 
##   "Probabilistic methods for classifying text form a rich tradition in machine learning and natural lan" 
##                                                                                                text1.1.3 
## "Borrowing from automated \"text as data\" approaches, we show how statistical scaling models can be ap" 
##                                                                                                text1.1.4 
## "This chapter thoroughly describes the idea of analyzing text \"as data\" with a social science focus. " 
##                                                                                                text1.1.5 
##   "Probabilistic methods for classifying text form a rich tradition in machine learning and natural lan" 
##                                                                                                text1.2.1 
## "Borrowing from automated \"text as data\" approaches, we show how statistical scaling models can be ap"

Better to tidy up the extracted tag and make it into an actual date, which you can later use to split into weeks or whatever other date ranges you want.

# tidy up docvars
names(docvars(corp))[1] <- "date"
docvars(corp, "date") <-
  stringi::stri_replace_all_fixed(docvars(corp, "date"), c("(", ")"), c("", ""), vectorize_all = FALSE) %>%
  lubridate::dmy()

summary(corp)
## Corpus consisting of 12 documents:
## 
##       Text Types Tokens Sentences       date
##  text1.1.1    83    135         6 2009-02-25
##  text1.1.2   119    195         7 2009-02-26
##  text1.1.3    96    137         5 2009-02-28
##  text1.1.4    83    136         6 2009-03-02
##  text1.1.5   119    195         7 2009-03-03
##  text1.2.1    96    137         5 2009-03-04
##  text1.2.2   119    195         7 2009-03-05
##  text1.2.3    83    135         6 2009-03-06
##  text1.2.4    83    135         6 2009-03-07
##  text1.2.5   119    195         7 2009-03-08
##  text1.2.6    96    137         5 2009-03-09
##  text1.2.7    83    135         6 2009-03-10
## 
## Source: /private/var/folders/1v/ps2x_tvd0yg0lypdlshg_vwc0000gp/T/RtmpDG9tad/reprexd97c6e16bef8/* on x86_64 by kbenoit
## Created: Sun Jul 28 11:29:45 2019
## Notes: corpus_segment.corpus(., pattern = "\\(\\d{2}\\.\\d{2}\\.\\d{4}\\)", valuetype = "regex", pattern_position = "before")
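
From here the counting itself is straightforward. As a minimal sketch (this part is not from the answer above): ntoken() gives the number of tokens in each daily segment, and the weeks delimited by the ($) tags are already encoded in the document names produced by the first corpus_segment() call (text1.1.*, text1.2.*), so you can group on that prefix; you could equally group on the tidied date docvar instead.

# Count words per day and per week -- a sketch, not part of the original answer.
# ntoken() counts tokens per document; remove_punct = TRUE drops punctuation.
docvars(corp, "ntokens") <- ntoken(corp, remove_punct = TRUE)

# The "($)" weeks are encoded in the docnames from the first segmentation,
# e.g. "text1.1.3" belongs to week "text1.1".
docvars(corp, "week") <- sub("\\.\\d+$", "", docnames(corp))

# words per day
docvars(corp)[, c("date", "ntokens")]

# words per week
aggregate(ntokens ~ week, data = docvars(corp), FUN = sum)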
