Latent Semantic Analysis results

I am working through a tutorial on LSA and switched the example over to a different list of strings, and I'm not sure the code is working as intended.

When I use the example input given in the tutorial, it produces sensible answers. However, when I use my own input, I get very strange results.

For comparison, here are the results for the example input:

[screenshot: similarity results for the tutorial example]

Here is what I get when I use my own example. It is also worth noting that I don't seem to get consistent results:

[screenshot: results for my own input, first run]

[screenshot: results for my own input, second run]

Any help figuring out why I am getting these results would be greatly appreciated :)

Here is the code:

# Import all of the scikit-learn stuff
# (the __future__ import must come before any other statement)
from __future__ import print_function
import sklearn
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Normalizer
from sklearn import metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
import pandas as pd
import warnings
# Suppress warnings from pandas library
warnings.filterwarnings("ignore", category=DeprecationWarning,
                        module="pandas", lineno=570)
import numpy as np


example = ["Coffee brewed by expressing or forcing a small amount of 
nearly boiling water under pressure through finely ground coffee 
beans.", 
"An espresso-based coffee drink consisting of espresso with 
microfoam (steamed milk with small, fine bubbles with a glossy or 
velvety consistency)", 
"American fast-food dish, consisting of french fries covered in 
cheese with the possible addition of various other toppings", 
"Pounded and breaded chicken is topped with sweet honey, salty 
dill pickles, and vinegar-y iceberg slaw, then served upon crispy 
challah toast.", 
"A layered, flaky texture, similar to a puff pastry."]

'''
example = ["Machine learning is super fun",
"Python is super, super cool",
"Statistics is cool, too",
"Data science is fun",
"Python is great for machine learning",
"I like football",
"Football is great to watch"]
'''

vectorizer = CountVectorizer(min_df = 1, stop_words = 'english')
dtm = vectorizer.fit_transform(example)
pd.DataFrame(dtm.toarray(),index=example,columns=vectorizer.get_feature_names()).head(10)

# Get words that correspond to each column
vectorizer.get_feature_names()

# Fit LSA. Use algorithm='randomized' for large datasets
lsa = TruncatedSVD(2, algorithm = 'arpack')
dtm_lsa = lsa.fit_transform(dtm.astype(float))
dtm_lsa = Normalizer(copy=False).fit_transform(dtm_lsa)

pd.DataFrame(lsa.components_,index = ["component_1","component_2"],columns = vectorizer.get_feature_names())

pd.DataFrame(dtm_lsa, index = example, columns = ["component_1","component_2"])

xs = [w[0] for w in dtm_lsa]
ys = [w[1] for w in dtm_lsa]
xs, ys

# Plot scatter plot of points
%pylab inline
import matplotlib.pyplot as plt
figure()
plt.scatter(xs,ys)
xlabel('First principal component')
ylabel('Second principal component')
title('Plot of points against LSA principal components')
show()

#Plot scatter plot of points with vectors
%pylab inline
import matplotlib.pyplot as plt
plt.figure()
ax = plt.gca()
ax.quiver(0,0,xs,ys,angles='xy',scale_units='xy',scale=1, linewidth = .01)
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
xlabel('First principal component')
ylabel('Second principal component')
title('Plot of points against LSA principal components')
plt.draw()
plt.show()

# Compute document similarity using LSA components
similarity = np.asarray(np.asmatrix(dtm_lsa) * np.asmatrix(dtm_lsa).T)
pd.DataFrame(similarity, index=example, columns=example).head(10)

The problem appears to come from the small number of examples you are using, combined with the normalisation step. Because TruncatedSVD maps your count vectors to many very small numbers and one comparatively big number, when you normalise these you see some strange behaviour. You can see this by looking at scatter plots of your data.

dtm_lsa = lsa.fit_transform(dtm.astype(float))
fig, ax = plt.subplots()
for i in range(dtm_lsa.shape[0]):
    ax.scatter(dtm_lsa[i, 0], dtm_lsa[i, 1], label=f'{i+1}')
ax.legend()

[scatter plot: unnormalised]

I would say that this plot is representative of your data, with the two coffee examples on the right (it is hard to say much more with such a small number of examples). However, when you normalise the data:

dtm_lsa = lsa.fit_transform(dtm.astype(float))
dtm_lsa = Normalizer(copy=False).fit_transform(dtm_lsa)
fig, ax = plt.subplots()
for i in range(dtm_lsa.shape[0]):
    ax.scatter(dtm_lsa[i, 0], dtm_lsa[i, 1], label=f'{i+1}')
ax.legend()

[scatter plot: normalised]

This pushes some of the points on top of each other, which is what gives you similarities of 1. The issue will almost certainly go away with more variance in the data, i.e. as you add more new samples.
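To make the collapse concrete, here is a minimal, self-contained sketch (using made-up two-component vectors, not your actual LSA output) showing that two rows dominated by the same component become nearly identical once scaled to unit length, so their dot product, which is what the similarity matrix computes, comes out close to 1:

import numpy as np
from sklearn.preprocessing import Normalizer

# Two made-up LSA rows: both dominated by the first component
vecs = np.array([[0.90, 0.02],
                 [0.40, 0.03]])

# Row-wise scaling to unit length, as in the code above
unit = Normalizer(copy=False).fit_transform(vecs)
print(unit)
# approximately:
# [[0.9997 0.0222]
#  [0.9972 0.0748]]

# Dot product of the two unit vectors is almost 1
print(unit[0] @ unit[1])  # ~0.9986, despite very different raw vectors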

