
How to use WordNet similarity perl modules in Python?

I want to use the Perl modules from the WordNet::Similarity package to calculate semantic relatedness (Hirst-St Onge, lexical chains, etc.) between texts. Does anybody have an idea how to use them in Python?

NLTK is the Python interface to WordNet.

Check out section 5 on this page for similarity. You will also have to install the module; the instructions for that are here.
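
For example, here is a minimal sketch of the WordNet similarity measures NLTK ships (path, Leacock-Chodorow, Wu-Palmer); note that Hirst-St Onge itself is, as far as I know, only implemented in the Perl WordNet::Similarity package:

    # Minimal sketch: WordNet similarity measures built into NLTK.
    # Requires nltk.download('wordnet') to have been run once.
    from nltk.corpus import wordnet as wn

    dog = wn.synset('dog.n.01')
    cat = wn.synset('cat.n.01')

    print(dog.path_similarity(cat))  # shortest-path score in (0, 1]
    print(dog.lch_similarity(cat))   # Leacock-Chodorow
    print(dog.wup_similarity(cat))   # Wu-Palmer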

You need to use the WordNet corpus reader, nltk.corpus.wordnet. It has a synsets() method that will return what you need. Basically, the WordNet corpus from NLTK offers words, synsets, lemmas, verb frames, similarity measures, etc.; see the sketch below. You can check NLTK's extensive WordNet HOWTO for code and approach samples.
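
A short sketch of that corpus reader, assuming a recent NLTK where it is exposed as nltk.corpus.wordnet; the information-content based measures additionally need the wordnet_ic data:

    # Sketch: words, synsets, lemmas and definitions via the WordNet corpus reader.
    from nltk.corpus import wordnet as wn

    for syn in wn.synsets('bank'):
        print(syn.name(), '-', syn.definition())
        print('  lemmas:', [lemma.name() for lemma in syn.lemmas()])

    # Information-content based measures (Resnik, Jiang-Conrath, Lin)
    # need an IC file; requires nltk.download('wordnet_ic').
    from nltk.corpus import wordnet_ic
    brown_ic = wordnet_ic.ic('ic-brown.dat')
    print(wn.synset('bank.n.01').res_similarity(wn.synset('money.n.01'), brown_ic))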

