I tried naive bayes in both python and R and got different AUROC values. Why would that be the case?
R Code:
library(bnlearn)
library(pROC)
library(tm)
corpus <- VCorpus(VectorSource(data$TEXT))  # paste(..., sep = ' ') on a single vector is a no-op
dtm <- DocumentTermMatrix(corpus, control = list(tolower = TRUE,
removeNumbers = FALSE,
stopwords = TRUE,
removePunctuation = TRUE,
stemming = TRUE))
convert_codes <- function(x) ifelse(x > 0, 1, 0)  # binarise counts: presence/absence
dtm <- apply(dtm, MARGIN = 2, convert_codes)
# bnlearn's discrete naive.bayes needs a data frame of factors,
# including the class variable, and the training variable as a string
dtm <- as.data.frame(lapply(as.data.frame(dtm), factor))
dtm$APPROVAL <- factor(data$APPROVAL)
model <- naive.bayes(dtm, "APPROVAL", colnames(dtm)[-length(dtm)])
preds <- predict(model, dtm, prior = c(0.5, 0.5), prob = TRUE)
data$SCORE <- t(attr(preds, "prob"))[,2]
data$SCORE[is.nan(data$SCORE)] <- 0
print(auc(data$APPROVAL, data$SCORE))
Result = 0.93
Python Code:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
pipe = Pipeline([
('vectorizer', CountVectorizer()),
('model', MultinomialNB())
])
pipe.fit(data["TEXT"], data["APPROVAL"])
preds = pipe.predict_proba(data["TEXT"])
print(roc_auc_score(data["APPROVAL"], preds[:,1]))
Result = 0.76
Why is there such a big discrepancy?
The pipelines you defined in R and Python are not the same, so the two models are fitted on different inputs.
In R, the DocumentTermMatrix removes stop words and punctuation and stems the tokens, and you then binarise the counts, so bnlearn's discrete naive.bayes sees presence/absence features over a stemmed, stopword-filtered vocabulary. In Python, CountVectorizer only lowercases and tokenises, and MultinomialNB is fitted on raw term counts. Different feature sets and different event models (Bernoulli-style binary features vs. multinomial counts) will generally produce different score rankings, and hence different AUROC values.
A similar mismatch arises with term weighting: the weighting parameter of DocumentTermMatrix defaults to weightTf and thus does not take the idf component into account, while sklearn's TfidfVectorizer has the default parameter use_idf=True. With CountVectorizer, as in your code, both sides use plain counts, so the preprocessing and model differences above are the likely cause here.
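A quick way to check this explanation is to make the sklearn pipeline resemble the R one. The sketch below uses toy stand-in data (the `texts`/`labels` lists are assumptions, not from the question): `binary=True` mimics the 0/1 DTM, `stop_words='english'` roughly mirrors tm's `stopwords = TRUE` (sklearn does no stemming, so the match is only approximate), and `BernoulliNB` with `class_prior=[0.5, 0.5]` parallels `prior = c(0.5, 0.5)`:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import Pipeline

# Toy stand-ins for data["TEXT"] / data["APPROVAL"]
texts = ["great product works well", "terrible broke after a day",
         "works great love it", "awful terrible waste of money",
         "love this product", "broke immediately awful"]
labels = [1, 0, 1, 0, 1, 0]

pipe = Pipeline([
    # binary=True -> presence/absence features, like the 0/1 DTM in R;
    # stop_words='english' roughly mirrors stopwords = TRUE in tm
    ('vectorizer', CountVectorizer(binary=True, stop_words='english')),
    # BernoulliNB models binary features; class_prior mimics prior = c(0.5, 0.5)
    ('model', BernoulliNB(class_prior=[0.5, 0.5])),
])
pipe.fit(texts, labels)

# Column order of predict_proba follows pipe.classes_ (sorted labels),
# so [:, 1] is P(label = 1)
scores = pipe.predict_proba(texts)[:, 1]
print(roc_auc_score(labels, scores))
```

This still won't reproduce the bnlearn number exactly (different smoothing, no stemming), but it isolates how much of the gap comes from the feature representation rather than from the data.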