Why do I get a very low accuracy with LSTM and pretrained word2vec?
Classification accuracy is too low (Word2Vec)
I am working on a multi-label emotion classification problem using word2vec. This is code I put together from several tutorials. Right now the accuracy is very low, around 0.02, which tells me something is wrong in my code, but I can't find it. I tried the same code with TF-IDF and BOW (obviously everything except the word2vec part) and got better accuracy scores, e.g. 0.28, but this one seems to be somehow wrong:
np.set_printoptions(threshold=sys.maxsize)
wv = gensim.models.KeyedVectors.load_word2vec_format("E:\\GoogleNews-vectors-negative300.bin", binary=True)
wv.init_sims(replace=True)

#Pre-Processor Function
pre_processor = TextPreProcessor(
    omit=['url', 'email', 'percent', 'money', 'phone', 'user',
          'time', 'url', 'date', 'number'],
    normalize=['url', 'email', 'percent', 'money', 'phone', 'user',
               'time', 'url', 'date', 'number'],
    segmenter="twitter",
    corrector="twitter",
    unpack_hashtags=True,
    unpack_contractions=True,
    tokenizer=SocialTokenizer(lowercase=True).tokenize,
    dicts=[emoticons]
)
#Averaging Words Vectors to Create Sentence Embedding
def word_averaging(wv, words):
    all_words, mean = set(), []
    for word in words:
        if isinstance(word, np.ndarray):
            mean.append(word)
        elif word in wv.vocab:
            mean.append(wv.syn0norm[wv.vocab[word].index])
            all_words.add(wv.vocab[word].index)
    if not mean:
        logging.warning("cannot compute similarity with no input %s", words)
        # FIXME: remove these examples in pre-processing
        return np.zeros(wv.vector_size,)
    mean = gensim.matutils.unitvec(np.array(mean).mean(axis=0)).astype(np.float32)
    return mean

def word_averaging_list(wv, text_list):
    return np.vstack([word_averaging(wv, post) for post in text_list])
#Secondary Word-Averaging Method
def get_mean_vector(word2vec_model, words):
    # remove out-of-vocabulary words
    words = [word for word in words if word in word2vec_model.vocab]
    if len(words) >= 1:
        return np.mean(word2vec_model[words], axis=0)
    else:
        return []
#Loading data
raw_train_tweets = pandas.read_excel('E:\\train.xlsx').iloc[:,1] #Loading all train tweets
train_labels = np.array(pandas.read_excel('E:\\train.xlsx').iloc[:,2:13]) #Loading corresponding train labels (11 emotions)
raw_test_tweets = pandas.read_excel('E:\\test.xlsx').iloc[:,1] #Loading 300 test tweets
test_gold_labels = np.array(pandas.read_excel('E:\\test.xlsx').iloc[:,2:13]) #Loading corresponding test labels (11 emotions)
print("please wait")

#Pre-Processing
train_tweets = []
test_tweets = []
for tweets in raw_train_tweets:
    train_tweets.append(pre_processor.pre_process_doc(tweets))
for tweets in raw_test_tweets:
    test_tweets.append(pre_processor.pre_process_doc(tweets))

#Vectorizing
train_array = word_averaging_list(wv, train_tweets)
test_array = word_averaging_list(wv, test_tweets)
#Predicting and Evaluating
clf = LabelPowerset(LogisticRegression(solver='lbfgs', C=1, class_weight=None))
clf.fit(train_array, train_labels)
predicted = clf.predict(test_array)

intersect = 0
union = 0
accuracy = []
for i in range(0, 3250):  # i have 3250 test tweets.
    for j in range(0, 11):  # 11 emotions
        if (predicted[i, j] & test_gold_labels[i, j]) == 1:
            intersect += 1
        if (predicted[i, j] | test_gold_labels[i, j]) == 1:
            union += 1
    accuracy.append(intersect/union if union != 0 else 0.0)
    intersect = 0
    union = 0
print(np.mean(accuracy))
Result:
0.4674498168498169
I printed the predicted variable (for tweets 0 to 10) to see what it looks like:
(0, 0) 1
(0, 2) 1
(2, 0) 1
(2, 2) 1
(3, 4) 1
(3, 6) 1
(4, 0) 1
(4, 2) 1
(5, 0) 1
(5, 2) 1
(6, 0) 1
(6, 2) 1
(7, 0) 1
(7, 2) 1
(8, 4) 1
(8, 6) 1
(9, 3) 1
(9, 8) 1
As you can see, it only shows the 1s. For example, (6, 2) means that in tweet number 6, emotion number 2 is 1; (9, 8) means that in tweet number 9, emotion number 8 is 1. The other emotions are treated as 0. You can picture it like this to better understand what I'm doing in my accuracy method:
gold emotion for tweet 0: [1 1 0 0 0 0 1 0 0 0 1]
predicted emotion for tweet 0: [1 0 1 0 0 0 0 0 0 0 0]
I take the union and intersection of the indices one by one: 1 with 1, 1 with 0, 0 with 1, and so on, up to gold emotion 11 against predicted emotion 11. I do this for every tweet inside the two for loops.
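The two loops above amount to a per-tweet Jaccard accuracy. A minimal vectorized NumPy sketch of the same computation, using two hypothetical toy rows like the example above:

```python
import numpy as np

# Toy gold and predicted label matrices (hypothetical data):
# rows = tweets, columns = 11 emotions.
gold = np.array([[1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1],
                 [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]])
pred = np.array([[1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
                 [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]])

intersect = np.logical_and(gold, pred).sum(axis=1)  # per-tweet |gold ∩ pred|
union = np.logical_or(gold, pred).sum(axis=1)       # per-tweet |gold ∪ pred|
# Guard against tweets where both rows are all zeros (union == 0):
jaccard = np.divide(intersect, union,
                    out=np.zeros(len(gold)), where=union > 0)
print(jaccard.mean())
```

This replaces the nested counting loops with three array operations and handles the union-of-zero case the same way the `else accuracy.append(0.0)` branch does.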
Now I want to create Word2Vec vectors from my own tweet dataset using gensim. I changed some parts of the code above as follows:
#Averaging Words Vectors to Create Sentence Embedding
def word_averaging(wv, words):
    all_words, mean = set(), []
    for word in words:
        if isinstance(word, np.ndarray):
            mean.append(word)
        elif word in wv.vocab:
            mean.append(wv.syn0norm[wv.vocab[word].index])
            all_words.add(wv.vocab[word].index)
    if not mean:
        logging.warning("cannot compute similarity with no input %s", words)
        # FIXME: remove these examples in pre-processing
        return np.zeros(wv.vector_size,)
    mean = gensim.matutils.unitvec(np.array(mean).mean(axis=0)).astype(np.float32)
    return mean

def word_averaging_list(wv, text_list):
    return np.vstack([word_averaging(wv, post) for post in text_list])
#Loading data
raw_aggregate_tweets = pandas.read_excel('E:\\aggregate.xlsx').iloc[:,0] #Loading all aggregate tweets
raw_train_tweets = pandas.read_excel('E:\\train.xlsx').iloc[:,1] #Loading all train tweets
train_labels = np.array(pandas.read_excel('E:\\train.xlsx').iloc[:,2:13]) #Loading corresponding train labels (11 emotions)
raw_test_tweets = pandas.read_excel('E:\\test.xlsx').iloc[:,1] #Loading 300 test tweets
test_gold_labels = np.array(pandas.read_excel('E:\\test.xlsx').iloc[:,2:13]) #Loading corresponding test labels (11 emotions)
print("please wait")

#Pre-Processing
aggregate_tweets = []
train_tweets = []
test_tweets = []
for tweets in raw_aggregate_tweets:
    aggregate_tweets.append(pre_processor.pre_process_doc(tweets))
for tweets in raw_train_tweets:
    train_tweets.append(pre_processor.pre_process_doc(tweets))
for tweets in raw_test_tweets:
    test_tweets.append(pre_processor.pre_process_doc(tweets))
print(len(aggregate_tweets))

#Vectorizing
w2v_model = gensim.models.Word2Vec(aggregate_tweets, min_count=10, size=300, window=8)
print(w2v_model.wv.vectors.shape)
train_array = word_averaging_list(w2v_model.wv, train_tweets)
test_array = word_averaging_list(w2v_model.wv, test_tweets)
But I get this error:
TypeError Traceback (most recent call last)
<ipython-input-1-8a5fe4dbf144> in <module>
110 print(w2v_model.wv.vectors.shape)
111
--> 112 train_array = word_averaging_list(w2v_model.wv,train_tweets)
113 test_array = word_averaging_list(w2v_model.wv,test_tweets)
114
<ipython-input-1-8a5fe4dbf144> in word_averaging_list(wv, text_list)
70
71 def word_averaging_list(wv, text_list):
---> 72 return np.vstack([word_averaging(wv, post) for post in text_list ])
73
74 #Averaging Words Vectors to Create Sentence Embedding
<ipython-input-1-8a5fe4dbf144> in <listcomp>(.0)
70
71 def word_averaging_list(wv, text_list):
---> 72 return np.vstack([word_averaging(wv, post) for post in text_list ])
73
74 #Averaging Words Vectors to Create Sentence Embedding
<ipython-input-1-8a5fe4dbf144> in word_averaging(wv, words)
58 mean.append(word)
59 elif word in wv.vocab:
---> 60 mean.append(wv.syn0norm[wv.vocab[word].index])
61 all_words.add(wv.vocab[word].index)
62
TypeError: 'NoneType' object is not subscriptable
It's not clear what your TextPreProcessor or SocialTokenizer classes might do. You should edit your question to show their code, or to show some examples of the resulting texts, to be sure they work as you expect. (For example: show the first few and last few entries of all_tweets.)
Your line all_tweets = train_tweets.append(test_tweets) is unlikely to do what you expect. (It will append the entire list test_tweets as the final element of train_tweets, but then return None, which you assign to all_tweets. Your Word2Vec model may then be empty; you should enable INFO-level logging to watch its progress, review the output for anomalies, and add code after training that prints some details about the model, to confirm that useful training happened.)
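A minimal sketch of why that line goes wrong, using hypothetical toy token lists:

```python
train_tweets = [["good", "day"], ["bad", "news"]]
test_tweets = [["ok", "then"]]

# list.append mutates train_tweets in place and returns None:
result = train_tweets.append(test_tweets)
print(result)            # None
print(train_tweets[-1])  # [['ok', 'then']] -- a nested list, not a token list

# The intended combination is list concatenation:
train_tweets = [["good", "day"], ["bad", "news"]]
all_tweets = train_tweets + test_tweets
print(len(all_tweets))   # 3
```

So after the original line, all_tweets is None and any Word2Vec model built from it trains on nothing.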
Are you sure train_tweets is in the right format for your pipeline's .fit()? (The texts sent to Word2Vec training seem to have gone through .split(), but the texts in the pandas.Series train_tweets may never have been tokenized.)
Generally, it's a good idea to enable logging and to add more code after each step that confirms, by checking property values or printing excerpts of longer collections, that each step had the intended effect.