
Implementing scikit-learn machine learning algorithm

Link: https://stackoverflow.com/questions/18154278/is-there-a-maximum-size-for-the-nltk-naive-bayes-classifer

I'm having trouble implementing a scikit-learn machine learning algorithm in my code. One of the authors of scikit-learn helped me on the question linked above, but I can't quite get it working, and since my original question was about something else, I thought it best to open a new one.

The code takes tweets as input and reads their text and sentiment into a dictionary. It then parses each line of text, appending the text to one list and its sentiment to another (as the author suggested in the question linked above).

However, even though I used the code from the link and looked up the API as best I could, I feel I'm still missing something. Running the code below first gives me a stream of colon-separated output, like this:

  (0, 299)  0.270522159585
  (0, 271)  0.32340892262
  (0, 266)  0.361182814311
  : :
  (48, 123) 0.240644787937

Followed by:

['negative', 'positive', 'negative', 'negative', 'positive', 'negative', 'negative', 'negative', etc]

And then:

ValueError: empty vocabulary; perhaps the documents only contain stop words

Am I assigning the classifier in the wrong way? Here is my code:

import csv
import string
import HTMLParser                                    # Python 2 standard library
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

test_file = 'RawTweetDataset/SmallSample.csv'
#test_file = 'RawTweetDataset/Dataset.csv'
sample_tweets = 'SampleTweets/FlumeData2.txt'
csv_file = csv.DictReader(open(test_file, 'rb'), delimiter=',', quotechar='"')

tweetsDict = {}

for line in csv_file:
    tweetsDict.update({(line['SentimentText'],line['Sentiment'])})

tweets = []
labels = []
shortenedText = ""
for (text, sentiment) in tweetsDict.items():
    text = HTMLParser.HTMLParser().unescape(text.decode("cp1252", "ignore"))
    exclude = set(string.punctuation)
    for punct in string.punctuation:
        text = text.replace(punct,"")
    cleanedText = [e.lower() for e in text.split() if not e.startswith(('http', '@'))]
    shortenedText = [e.strip() for e in cleanedText if e not in exclude]

    text = ' '.join(ch for ch in shortenedText if ch not in exclude)
    tweets.append(text.encode("utf-8", "ignore"))
    labels.append(sentiment)

vectorizer = TfidfVectorizer(input='content')
X = vectorizer.fit_transform(tweets)
y = labels
classifier = MultinomialNB().fit(X, y)

X_test = vectorizer.fit_transform(sample_tweets)
y_pred = classifier.predict(X_test)

Update: current code:

all_files = glob.glob (tweet location)
for filename in all_files:
    with open(filename, 'r') as file:
        for line in file.readlines():
            X_test = vectorizer.transform([line])
            y_pred = classifier.predict(X_test)
            print line
            print y_pred

This always produces something like:

happy bday trish
['negative'] << Never changes, always negative

The problem is here:

X_test = vectorizer.fit_transform(sample_tweets)

fit_transform is called on the training set, not the test set. On the test set, call transform.
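
Concretely, the corrected training and prediction flow might look like the following (a minimal sketch reusing the variable names from the question; list_of_sample_tweets is assumed to be the list of tweets read from the sample file, as discussed below):

X = vectorizer.fit_transform(tweets)                   # training set: learns the vocabulary and IDF weights
classifier = MultinomialNB().fit(X, labels)

X_test = vectorizer.transform(list_of_sample_tweets)   # test set: reuses the training vocabulary
y_pred = classifier.predict(X_test)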

Also, sample_tweets is a filename. You should open it and read the tweets from it before passing them to the vectorizer (see the sketch after the snippet below). If you do that, then you should eventually be able to do something like this:

for tweet, sentiment in zip(list_of_sample_tweets, y_pred):
    print("Tweet: %s" % tweet)
    print("Sentiment: %s" % sentiment)

To do this in TextBlob (as mentioned in the comments), you would do:

from text.blob import TextBlob

tweets = ['This is tweet one, and I am happy.', 'This is tweet two and I am sad']

for tweet in tweets:
    blob = TextBlob(tweet)
    print blob.sentiment #Will return (Polarity, Subjectivity)
