
Gensim doc2vec sentence tagging

I'm trying to understand doc2vec and whether I can use it to solve my scenario. I want to label sentences with one or more tags using TaggedDocument(words, tags), but I'm unsure if my understanding is correct.

So basically, I need this to happen (or am I totally off the mark):

I create two TaggedDocuments:

TaggedDocument(words=["the", "bird", "flew", "over", "the", "coocoos", "nest", labels=["animal","tree"])
TaggedDocument(words=["this", "car", "is", "over", "one", "million", "dollars", labels=["motor","money"])

I build my model

model = gensim.models.Doc2Vec(documents, dm=0, alpha=0.025, size=20, min_alpha=0.025, min_count=0)

Then I train my model

model.train(documents, total_examples=len(documents), epochs=1)

So when I have all that done, what I expect when I execute

model.most_similar(positive=["bird", "flew", "over", "nest"])

is [animal, tree], but instead I get

[('the', 0.4732949137687683), 
('million', 0.34103643894195557),
('dollars', 0.26223617792129517),
('one', 0.16558100283145905),
('this', 0.07230066508054733),
('is', 0.012532509863376617),
('coocoos', -0.1093338280916214),
('car', -0.13764989376068115)]

UPDATE: when I infer

vec_model = gensim.models.Doc2Vec.load(os.path.join("save", "vec.w2v"))
infer = vec_model.infer_vector(["bird", "flew", "over", "nest"])
print(vec_model.most_similar(positive=[infer], topn=10))

I get

[('bird', 0.5196993350982666),
('car', 0.3320297598838806), 
('the',  0.1573483943939209), 
('one', 0.1546170711517334), 
('million',  0.05099521577358246),
('over', -0.0021460093557834625), 
('is',  -0.02949431538581848),
('dollars', -0.03168443590402603), 
('flew', -0.08121247589588165),
('nest', -0.30139490962028503)]

So, the elephant in the room: is doc2vec what I need to accomplish the above scenario, or should I go back to bed and have a proper think about what I'm trying to achieve in life? :)

Any help greatly appreciated

It's not clear what your goal is.

Your code examples are a bit muddled; as currently shown, the TaggedDocument constructions won't even parse: the words list is missing its closing bracket, and TaggedDocument takes a tags parameter, not labels. (words needs to be a list of word tokens, and tags a list of tag strings.)
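For illustration, a minimal sketch of well-formed constructions for the question's two documents (gensim's TaggedDocument pairs a words list with a tags list):

from gensim.models.doc2vec import TaggedDocument

# each document pairs a list of word tokens with a list of tag strings
documents = [
    TaggedDocument(words=["the", "bird", "flew", "over", "the", "coocoos", "nest"],
                   tags=["animal", "tree"]),
    TaggedDocument(words=["this", "car", "is", "over", "one", "million", "dollars"],
                   tags=["motor", "money"]),
]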

If you ask the model for similarities, you'll get words – if you want doc-tags, you'll have to ask the model's docvecs sub-property. (That is, model.docvecs.most_similar().)
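A minimal sketch of the two kinds of query, assuming a trained model and the pre-4.0 gensim API the question is using:

# nearest words – what the question's most_similar() call returns
print(model.most_similar(positive=["bird", "flew", "over", "nest"]))

# nearest doc-tags – infer a vector for the new text, then search the docvecs
inferred = model.infer_vector(["bird", "flew", "over", "nest"])
print(model.docvecs.most_similar(positive=[inferred], topn=4))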

Regarding your training parameters, there's no good reason to change the default min_alpha to be equal to the starting alpha. A min_count=0, retaining all words, usually makes word2vec/doc2vec vectors worse. And the algorithm typically needs many passes over the data – usually 10 or more – rather than one.
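Putting those together, a sketch of a more conventional setup; the min_count=2 and epochs=20 values here are illustrative choices rather than tuned recommendations, and size/epochs assume a pre-4.0 gensim like the question's code:

model = gensim.models.Doc2Vec(
    documents,
    dm=0,           # PV-DBOW, as in the question
    size=20,
    alpha=0.025,    # leave min_alpha at its small default so the rate decays
    min_count=2,    # drop the rarest words instead of retaining everything
    epochs=20,      # many passes over the data, not just one
)

Because documents is supplied to the constructor, gensim builds the vocabulary and trains immediately, so no separate model.train() call is needed.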

But also, word2vec/doc2vec really needs bulk data to achieve its results – toy-sized tests rarely show the same beneficial properties that are possible with larger datasets.
