
LDA topic modeling input data

I am new to Python. I have just started working on a project that uses LDA topic modeling on tweets. I am trying the following code:

This example uses an online dataset. I have a csv file that contains the tweets I need to use. Can anybody tell me how I can use my local file? How should I build my own vocab and titles?

I couldn't find a tutorial that explains how to prepare the input data for LDA; they all assume you already know how to do so.

from __future__ import division, print_function

import numpy as np
import lda
import lda.datasets

# document-term matrix
X = lda.datasets.load_reuters()
print("type(X): {}".format(type(X)))
print("shape: {}\n".format(X.shape))

# the vocab
vocab = lda.datasets.load_reuters_vocab()
print("type(vocab): {}".format(type(vocab)))
print("len(vocab): {}\n".format(len(vocab)))

# titles for each story
titles = lda.datasets.load_reuters_titles()
print("type(titles): {}".format(type(titles)))
print("len(titles): {}\n".format(len(titles)))

doc_id = 0
word_id = 3117
print("doc id: {} word id: {}".format(doc_id, word_id))
print("-- count: {}".format(X[doc_id, word_id]))
print("-- word : {}".format(vocab[word_id]))
print("-- doc  : {}".format(titles[doc_id]))

model = lda.LDA(n_topics=20, n_iter=500, random_state=1)
model.fit(X)

topic_word = model.topic_word_
print("type(topic_word): {}".format(type(topic_word)))
print("shape: {}".format(topic_word.shape))

for n in range(5):
    sum_pr = sum(topic_word[n, :])
    print("topic: {} sum: {}".format(n, sum_pr))

n = 5
for i, topic_dist in enumerate(topic_word):
    topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n + 1):-1]
    print('*Topic {}\n- {}'.format(i, ' '.join(topic_words)))

doc_topic = model.doc_topic_
print("type(doc_topic): {}".format(type(doc_topic)))
print("shape: {}".format(doc_topic.shape))

I know this comes a bit late, but I hope it helps. First, you have to understand that LDA operates on a DTM (Document-Term Matrix) only. So I propose you run through the following steps:

  1. Load your csv file
  2. Extract the tweets you need from the file
  3. Clean the data (a sketch of steps 1-3 follows this list)
  4. Build a dictionary mapping each document of the corpus to its cleaned text
  5. Build a DTM with CountVectorizer
  6. Fit the vectorizer to your documents
  7. Obtain the vocabulary - the DTM features (words)
  8. Continue with the code from the question above
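For steps 1-3, here is a minimal sketch. The filename, the column name text, and the cleaning rules are assumptions - adapt them to your own file:

import re
import pandas as pd

# step 1: load the csv file (filename is an assumption)
df = pd.read_csv("tweets.csv")

# step 2: extract the tweet texts (column name is an assumption)
raw_tweets = df["text"].astype(str).tolist()

# step 3: basic cleaning - strip urls, @mentions, and non-letters
def clean(tweet):
    tweet = re.sub(r"http\S+", " ", tweet)
    tweet = re.sub(r"@\w+", " ", tweet)
    tweet = re.sub(r"[^a-zA-Z\s]", " ", tweet)
    return tweet.lower().strip()

txt1 = [clean(t) for t in raw_tweets]

The list txt1 then plays the role of the cleaned corpus used in the code below.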

Here is some code to help you get started:

from sklearn.feature_extraction.text import CountVectorizer
import lda

# map each document index to its (cleaned) text
token_dict = {}
for i in range(len(txt1)):
    token_dict[i] = txt1[i]

print(len(token_dict))

print("\n Build DTM")
tf = CountVectorizer(stop_words='english')

print("\n Fit DTM")
tfs1 = tf.fit_transform(token_dict.values())

# set the number of topics to look for
num = 8

model = lda.LDA(n_topics=num, n_iter=500, random_state=1)

# we fit the DTM, not the TF-IDF matrix, to LDA
print("\n Fit LDA to data set")
model.fit_transform(tfs1)

print("\n Obtain the words with high probabilities")
topic_word = model.topic_word_  # model.components_ also works

print("\n Obtain the feature names")
vocab = tf.get_feature_names()  # get_feature_names_out() on newer scikit-learn
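As for the tutorial's vocab and titles: vocab is just the feature list obtained above, and the tweets themselves (or their ids) can serve as the titles. As a sketch, assuming the txt1 list from the steps above, you can then inspect the topics the same way the tutorial does:

import numpy as np

titles = txt1  # each tweet's text doubles as its "title"

# print the top 5 words for each topic
n_top_words = 5
for i, topic_dist in enumerate(topic_word):
    top = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words + 1):-1]
    print('*Topic {}\n- {}'.format(i, ' '.join(top)))

# most probable topic for the first few tweets
doc_topic = model.doc_topic_
for d in range(min(5, len(titles))):
    print("{}... (top topic: {})".format(titles[d][:40], doc_topic[d].argmax()))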
