How to use Stanford Dependencies parser in Python

How do I use the Stanford Dependency parser in NLTK? I have tried the code below, but it does not give any tree structure. Can you please guide me? I am new to Python and NLTK.

-----------------------------------------------------------------------------------------

    import os

    sentence = "this is a foo bar i want to parse."

    # Write the sentence to a temporary file and run the Stanford lexparser shell script on it.
    os.popen("echo '" + sentence + "' > ~/stanfordtemp.txt")
    parser_out = os.popen("~/stanford-parser-full-2014-06-16/lexparser.sh ~/stanfordtemp.txt").readlines()

    # Keep only the lines that belong to the bracketed parse (they start with "(").
    bracketed_parse = " ".join([i.strip() for i in parser_out if len(i.strip()) > 0 and i.strip()[0] == "("])
    print(bracketed_parse)

-----------------------------------------------------------------------------------------
Output:
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [1.3 sec].
Parsing file: /home/stanfordtemp.txt
Parsing [sent. 1 len. 10]: this is a foo bar i want to parse .
Parsed file: /home/stanfordtemp.txt [1 sentences].
Parsed 10 words in 1 sentences (18.05 wds/sec; 1.81 sents/sec).

Have you tried one of the Python wrappers listed on the Stanford NLP page? Komatsu's and Castner's appear to be up to v3.3 (the current version is 3.4).
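For reference, here is a minimal sketch using the StanfordDependencyParser wrapper that ships with more recent NLTK releases (3.1+). The jar paths below are assumptions; point them at wherever your Stanford parser download lives.

    from nltk.parse.stanford import StanfordDependencyParser

    # Assumed paths: adjust to your local copy of the Stanford parser distribution.
    dep_parser = StanfordDependencyParser(
        path_to_jar="/home/stanford-parser-full-2014-06-16/stanford-parser.jar",
        path_to_models_jar="/home/stanford-parser-full-2014-06-16/stanford-parser-3.4-models.jar",
    )

    # raw_parse() returns an iterator of DependencyGraph objects, one per sentence.
    for graph in dep_parser.raw_parse("this is a foo bar i want to parse."):
        # triples() yields (head, relation, dependent) tuples describing the dependency parse.
        for head, rel, dep in graph.triples():
            print(head, rel, dep)

This avoids shelling out to lexparser.sh and gives you the dependencies as Python objects rather than text you have to filter.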
