
How to apply tf-idf to the whole dataset (training and testing) instead of only the training dataset within a Naive Bayes classifier class?

I have a Naive Bayes classifier class that classifies mails as either spam or ham, with tf-idf already implemented in it. However, the tf-idf section only calculates tf-idf for the training dataset.

This is the Classifier class:

from math import log  # log is used by calc_TF_IDF and classify below

class SpamClassifier(object):
    def __init__(self, traindata):
        self.mails, self.labels = traindata['Review'], traindata['Polarity']

    def train(self):
        self.calc_TF_and_IDF()
        self.calc_TF_IDF()

    def calc_TF_and_IDF(self):
        noOfMessages = self.mails.shape[0]
        self.spam_mails, self.ham_mails = self.labels.value_counts()[1], self.labels.value_counts()[0]
        self.total_mails = self.spam_mails + self.ham_mails
        self.spam_words = 0
        self.ham_words = 0
        self.tf_spam = dict()
        self.tf_ham = dict()
        self.idf_spam = dict()
        self.idf_ham = dict()
        for i in range(noOfMessages):
            message = self.mails[i]
            count = list()  # To keep track of whether the word has occurred in the message or not,
                            # for IDF
            for word in message:
                if self.labels[i]:
                    self.tf_spam[word] = self.tf_spam.get(word, 0) + 1
                    self.spam_words += 1
                else:
                    self.tf_ham[word] = self.tf_ham.get(word, 0) + 1
                    self.ham_words += 1
                if word not in count:
                    count += [word]
            for word in count:
                if self.labels[i]:
                    self.idf_spam[word] = self.idf_spam.get(word, 0) + 1
                else:
                    self.idf_ham[word] = self.idf_ham.get(word, 0) + 1

    def calc_TF_IDF(self):
        self.prob_spam = dict()
        self.prob_ham = dict()
        self.sum_tf_idf_spam = 0
        self.sum_tf_idf_ham = 0
        for word in self.tf_spam:
            self.prob_spam[word] = (self.tf_spam[word]) * log((self.spam_mails + self.ham_mails) \
                                                          / (self.idf_spam[word] + self.idf_ham.get(word, 0)))
            self.sum_tf_idf_spam += self.prob_spam[word]
        for word in self.tf_spam:
            self.prob_spam[word] = (self.prob_spam[word] + 1) / (self.sum_tf_idf_spam + len(list(self.prob_spam.keys())))

        for word in self.tf_ham:
            self.prob_ham[word] = (self.tf_ham[word]) * log((self.spam_mails + self.ham_mails) \
                                                          / (self.idf_spam.get(word, 0) + self.idf_ham[word]))
            self.sum_tf_idf_ham += self.prob_ham[word]
        for word in self.tf_ham:
            self.prob_ham[word] = (self.prob_ham[word] + 1) / (self.sum_tf_idf_ham + len(list(self.prob_ham.keys())))


        self.prob_spam_mail, self.prob_ham_mail = self.spam_mails / self.total_mails, self.ham_mails / self.total_mails 

    def classify(self, processed_message):
        pSpam, pHam = 0, 0
        for word in processed_message:
            if word in self.prob_spam:
                pSpam += log(self.prob_spam[word])
            else:
                # Fallback penalty for words never seen in spam training mails
                pSpam -= log(self.sum_tf_idf_spam + len(list(self.prob_spam.keys())))
            if word in self.prob_ham:
                pHam += log(self.prob_ham[word])
            else:
                pHam -= log(self.sum_tf_idf_ham + len(list(self.prob_ham.keys())))
        # Add the class priors once per message, not once per word
        pSpam += log(self.prob_spam_mail)
        pHam += log(self.prob_ham_mail)
        return pSpam >= pHam

    def predict(self, testdata):
        result = []
        for (i, message) in enumerate(testdata):
            #processed_message = process_message(message)
            result.append(int(self.classify(message)))
        return result

This is how I call the classifier:

sc_tf_idf = SpamClassifier(traindata)
sc_tf_idf.train()
preds_tf_idf = sc_tf_idf.predict(testdata['Review'])
testdata['Predictions'] = preds_tf_idf
print(testdata['Polarity'], testdata['Predictions'])

How would I apply the tf-idf calculation within the classifier to the whole dataset (training and testing)?

You should not calculate tf-idf on the training and testing data together. First split the dataset into train and test (and validation) sets; then fit the tf-idf statistics on the training set only and apply them to the test set. If you calculate tf-idf before splitting, the model will learn some of the "features" of the test/validation data and report overly optimistic performance (data leakage). You can refer to the detailed answers here.
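For illustration, here is a minimal hand-rolled sketch of that fit/transform split, in the spirit of the class above. The helper names fit_idf and tf_idf_transform are hypothetical, and the sketch assumes each document is a list of tokens, as in the question's Review column.

from math import log

def fit_idf(train_docs):
    # Learn document frequencies from the TRAINING documents only
    df = dict()
    for doc in train_docs:            # each doc is a list of tokens
        for word in set(doc):
            df[word] = df.get(word, 0) + 1
    n_docs = len(train_docs)
    # IDF comes from training counts; test documents never influence it
    return {word: log(n_docs / count) for word, count in df.items()}

def tf_idf_transform(doc, idf):
    # Weight any document (train or test) with the IDF learned on the training set
    weights = dict()
    for word in doc:
        if word in idf:               # words unseen during training are simply skipped
            weights[word] = weights.get(word, 0) + idf[word]
    return weights

# Hypothetical usage with the question's DataFrames:
# idf = fit_idf(list(traindata['Review']))                           # fit on training data only
# test_weights = tf_idf_transform(testdata['Review'].iloc[0], idf)   # transform a test message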

Additionally, you can use the TfidfVectorizer from sklearn.
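As a rough sketch, assuming traindata and testdata are the DataFrames from the question and that the Review column holds raw text strings (join the token lists first if it does not), the leakage-free pattern with sklearn looks like this:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(traindata['Review'])  # IDF fitted on training data only
X_test = vectorizer.transform(testdata['Review'])        # test data reuses the training IDF

clf = MultinomialNB()
clf.fit(X_train, traindata['Polarity'])

testdata['Predictions'] = clf.predict(X_test)
print(testdata[['Polarity', 'Predictions']])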
