
Python Processing time 30+ mins in VSCODE

I'm new to programming, so please be patient and keep it simple, because I only started learning Python last week. I'm happy to post any additional information you need, but please remember I'm a n00b.

My problem:

I'm using Visual Studio Code with Python 2.7 on macOS Sierra and I'm running into YUGE data processing times (i.e. 5+ minutes, closer to 10+ minutes, and 30+ minutes on this particular code).

Any suggestions? I really can't find much about this anywhere online.

While these processes run, the CPU in my Activity Monitor sits steadily at 98%, and I don't know whether that's normal or how to speed things up.

Caveat:

With simple code my processing times aren't too bad, but as soon as algorithms get involved things seem to bog down, which is frustrating.

Below is the code I'm using; apart from the crazy run time, the output at the end includes:



    import nltk
    import random
    from nltk.corpus import movie_reviews
    from nltk.classify.scikitlearn import SklearnClassifier
    import pickle

    from sklearn.naive_bayes import MultinomialNB, GaussianNB, BernoulliNB
    from sklearn.linear_model import LogisticRegression, SGDClassifier
    from sklearn.svm import SVC, LinearSVC, NuSVC

    from nltk.classify import ClassifierI
    from statistics import mode


    class VoteClassifier(ClassifierI):
        def __init__(self, *classifiers):
            self._classifiers = classifiers

        def classify(self, features):
            votes = []
            for c in self._classifiers:
                v = c.classify(features)
                votes.append(v)
            return mode(votes)

        def confidence(self, features):
            votes = []
            for c in self._classifiers:
                v = c.classify(features)
                votes.append(v)

            choice_votes = votes.count(mode(votes))
            conf = choice_votes / len(votes)
            return conf



    documents = [(list(movie_reviews.words(fileid)), category)
                for category in movie_reviews.categories()
                for fileid in movie_reviews.fileids(category)]

    random.shuffle(documents)

    all_words = []
    for w in movie_reviews.words():
        all_words.append(w.lower())

    all_words = nltk.FreqDist(all_words)

    word_features = list(all_words.keys())[:3000]

    def find_features(document):
        words = set(document)
        features = {}
        for w in word_features:
            features[w] = (w in words)

        return features

    # print((find_features(movie_reviews.words('neg/cv000_29416.txt'))))

    featuresets = [(find_features(rev), category) for (rev, category) in documents]

    training_set = featuresets[:1900]
    testing_set = featuresets[:1900:]
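    # note: [:1900:] is the same slice as [:1900], so this "testing" set is
    # identical to the training set; featuresets[1900:] would hold out the remaining reviews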

    # classifier = nltk.NaiveBayesClassifier.train(training_set)

    classifier_f = open("naivebayes.pickle", "rb")
    classifier = pickle.load(classifier_f)
    classifier_f.close()

    print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
    classifier.show_most_informative_features(15)

    # save_classifier = open("naivebayes.pickle", "wb")
    # pickle.dump(classifier, save_classifier)
    # save_classifier.close()

    MNB_classifier = SklearnClassifier(MultinomialNB())
    MNB_classifier.train(training_set)
    print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)

    # GaussianNB_classifier = SklearnClassifier(GaussianNB())
    # GaussianNB_classifier.train(training_set)
    # print("GaussianNB_classifier accuracy percent:", (nltk.classify.accuracy(GaussianNB_classifier, testing_set))*100)

    BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
    BernoulliNB_classifier.train(training_set)
    print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)

    LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
    LogisticRegression_classifier.train(training_set)
    print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)

    SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
    SGDClassifier_classifier.train(training_set)
    print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)

    # SVC_classifier = SklearnClassifier(SVC())
    # SVC_classifier.train(training_set)
    # print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)

    LinearSVC_classifier = SklearnClassifier(LinearSVC())
    LinearSVC_classifier.train(training_set)
    print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)

    NuSVC_classifier = SklearnClassifier(NuSVC())
    NuSVC_classifier.train(training_set)
    print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)

    voted_classifier = VoteClassifier(classifier, MNB_classifier, BernoulliNB_classifier, LogisticRegression_classifier, SGDClassifier_classifier, LinearSVC_classifier, NuSVC_classifier)

    print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100)

    print("Classication:", voted_classifier.classify(testing_set[0][0]), "Confidence %:", voted_classifier.confidence(testing_set[0][0])*100)

    print("Classication:", voted_classifier.classify(testing_set[1][0]), "Confidence %:", voted_classifier.confidence(testing_set[1][0])*100)
    print("Classication:", voted_classifier.classify(testing_set[2][0]), "Confidence %:", voted_classifier.confidence(testing_set[2][0])*100)
    print("Classication:", voted_classifier.classify(testing_set[3][0]), "Confidence %:", voted_classifier.confidence(testing_set[3][0])*100)
    print("Classication:", voted_classifier.classify(testing_set[4][0]), "Confidence %:", voted_classifier.confidence(testing_set[4][0])*100)
    print("Classication:", voted_classifier.classify(testing_set[5][0]), "Confidence %:", voted_classifier.confidence(testing_set[5][0])*100)



    ('Original Naive Bayes Algo accuracy percent:', 87.31578947368422)
    Most Informative Features
                  insulting = True              neg : pos    =     11.0 : 1.0
                       sans = True              neg : pos    =      9.0 : 1.0
               refreshingly = True              pos : neg    =      8.4 : 1.0
                    wasting = True              neg : pos    =      8.3 : 1.0
                 mediocrity = True              neg : pos    =      7.7 : 1.0
                  dismissed = True              pos : neg    =      7.0 : 1.0
                    customs = True              pos : neg    =      6.3 : 1.0
                     fabric = True              pos : neg    =      6.3 : 1.0
                overwhelmed = True              pos : neg    =      6.3 : 1.0
                bruckheimer = True              neg : pos    =      6.3 : 1.0
                      wires = True              neg : pos    =      6.3 : 1.0
                  uplifting = True              pos : neg    =      6.2 : 1.0
                        ugh = True              neg : pos    =      5.8 : 1.0
                     stinks = True              neg : pos    =      5.8 : 1.0
                       lang = True              pos : neg    =      5.7 : 1.0
    ('MNB_classifier accuracy percent:', 89.21052631578948)
    ('BernoulliNB_classifier accuracy percent:', 86.42105263157895)
    ('LogisticRegression_classifier accuracy percent:', 94.47368421052632)
    ('SGDClassifier_classifier accuracy percent:', 85.73684210526315)
    ('LinearSVC_classifier accuracy percent:', 99.52631578947368)
    ('NuSVC_classifier accuracy percent:', 91.52631578947368)
    ('voted_classifier accuracy percent:', 93.36842105263158)
    ('Classication:', u'pos', 'Confidence %:', 100)
    ('Classication:', u'pos', 'Confidence %:', 0)
    ('Classication:', u'neg', 'Confidence %:', 0)
    ('Classication:', u'neg', 'Confidence %:', 100)
    ('Classication:', u'neg', 'Confidence %:', 100)
    ('Classication:', u'neg', 'Confidence %:', 100)

I'm not sure there's anything wrong. The movie reviews corpus isn't all that large, but training a classifier takes a long time... and you're training seven of them, with three thousand features each. Once you start working with larger datasets, don't be surprised if training a single classifier takes all night.
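
A quick way to see this for yourself is to time each `train()` call. The helper below is a minimal, illustrative sketch (the `train_timed` name is mine, not from your post) that reuses the `SklearnClassifier` wrappers and `training_set` already defined in your code:

    import time

    # Illustrative helper (not part of the original post): report how long
    # each classifier spends inside train() so the slow ones stand out.
    def train_timed(name, wrapped_classifier, training_set):
        start = time.time()
        wrapped_classifier.train(training_set)
        print("%s trained in %.1f seconds" % (name, time.time() - start))
        return wrapped_classifier

    MNB_classifier = train_timed("MultinomialNB",
                                 SklearnClassifier(MultinomialNB()),
                                 training_set)
    NuSVC_classifier = train_timed("NuSVC",
                                   SklearnClassifier(NuSVC()),
                                   training_set)
    # ...and so on for the remaining five classifiers; the kernel SVMs
    # (SVC/NuSVC) are usually the slowest on data of this shape.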

I'd recommend separating your training script from your testing script (you'll need to pickle all of your trained models), and/or printing timestamped messages at appropriate points so you can see which classifiers are eating your time. (Also: consider removing common "stopwords" like "the", "a", ".", etc. from your feature list.)
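
Here is a minimal sketch of those two ideas, assuming the variables from your code above and that the NLTK stopwords corpus has been downloaded (the `*.pickle` file names are illustrative): build the feature vocabulary without stopwords or punctuation, and save every trained model to disk so a separate testing script only has to load them.

    from nltk.corpus import stopwords   # requires nltk.download('stopwords')

    # Build the feature vocabulary without stopwords or punctuation tokens.
    stop_words = set(stopwords.words("english"))
    filtered_words = [w.lower() for w in movie_reviews.words()
                      if w.isalpha() and w.lower() not in stop_words]
    all_words = nltk.FreqDist(filtered_words)
    # most_common(3000) keeps the 3000 most frequent remaining words
    # (your code used list(all_words.keys())[:3000]).
    word_features = [w for w, count in all_words.most_common(3000)]

    # Training script: save each trained classifier to disk once.
    for name, clf in [("MNB", MNB_classifier),
                      ("BernoulliNB", BernoulliNB_classifier),
                      ("LinearSVC", LinearSVC_classifier)]:
        with open(name + ".pickle", "wb") as f:
            pickle.dump(clf, f)

    # Testing script: load the saved models instead of retraining them.
    with open("MNB.pickle", "rb") as f:
        MNB_classifier = pickle.load(f)

With the models pickled, the long training run is paid for once; rerunning the evaluation afterwards only costs the time to load and classify.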

