
Sentiment analysis Lexicon

I have created a corpus and processed it with the tm package; a snippet is below:

cleanCorpus <- function(corpus) {
  corpus.tmp <- tm_map(corpus, content_transformer(tolower))
  corpus.tmp <- tm_map(corpus.tmp, removePunctuation)
  corpus.tmp <- tm_map(corpus.tmp, removeNumbers)
  corpus.tmp <- tm_map(corpus.tmp, removeWords, stopwords("english"))
  corpus.tmp <- tm_map(corpus.tmp, stemDocument)
  corpus.tmp <- tm_map(corpus.tmp, stripWhitespace)
  return(corpus.tmp)
}

myCorpus <- Corpus(VectorSource(Data$body), readerControl = list(reader = readPlain))

cln.corpus <- cleanCorpus(myCorpus)

Now I am using the MPQA lexicon to get the total number of positive and negative words in each document of the corpus.

So I have the two word lists:

pos.words <- lexicon$word[lexicon$Polarity == "positive"]
neg.words <- lexicon$word[lexicon$Polarity == "negative"]

How should I go about comparing the content of each document with the positive and negative lists and getting the counts of both per document? I checked other posts on tm dictionaries, but it looks like that feature has been withdrawn.

For example, you can define a custom weighting function that assigns +1 to positive terms and -1 to negative terms, then sum each document's column:

library(tm)
data("crude")
myCorpus <- crude[1:2]
pos.words <- c("advantag", "easy", "cut")
neg.words <- c("problem", "weak", "uncertain")
weightSenti <- structure(function(m) {
    m$v <- rep(1, length(m$v))                    # binary: term present -> 1
    neg.idx <- which(rownames(m) %in% neg.words)  # rows holding negative terms
    m$v[m$i %in% neg.idx] <- -1                   # flip the sign of those entries
    attr(m, "weighting") <- c("binarySenti", "binSenti")
    m
}, class = c("WeightFunction", "function"), name = "binarySenti", acronym = "binSenti")
tdm <- TermDocumentMatrix(myCorpus, control = list(weighting = weightSenti, dictionary = c(pos.words, neg.words)))
colSums(as.matrix(tdm))
# 127 144 
#   2  -2
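The weighting above yields one net score per document. If you want the positive and negative counts separately, as the question asks, one option is to work on the plain matrix form of the TDM and sum the rows matching each list. A minimal self-contained sketch, using a small hypothetical matrix in place of as.matrix(tdm):

```r
# Toy term-document matrix standing in for as.matrix(tdm);
# rows are (stemmed) terms, columns are document IDs.
m <- matrix(c(2, 0, 1,   # counts in doc "127"
              0, 3, 1),  # counts in doc "144"
            nrow = 3,
            dimnames = list(c("advantag", "problem", "cut"),
                            c("127", "144")))

pos.words <- c("advantag", "easy", "cut")
neg.words <- c("problem", "weak", "uncertain")

# Subset rows by membership in each list; drop = FALSE keeps a matrix
# even when only one term matches, so colSums still works.
pos.counts <- colSums(m[rownames(m) %in% pos.words, , drop = FALSE])
neg.counts <- colSums(m[rownames(m) %in% neg.words, , drop = FALSE])

pos.counts  # 127: 3, 144: 1
neg.counts  # 127: 0, 144: 3
```

With the real data you would replace the toy matrix with as.matrix(tdm) built from cln.corpus; the net score per document is then simply pos.counts - neg.counts.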
