
How to use Stanford Parser to parse Chinese texts correctly in Python?

I am using the Stanford Parser to parse Chinese texts. I want to extract the context-free grammar production rules from the input Chinese text.

I set up my environment just as in Stanford Parser and NLTK.

My code is below:

from nltk.parse import stanford
parser = stanford.StanfordParser(path_to_jar='/home/stanford-parser-full-2013-11-12/stanford-parser.jar', 
                                 path_to_models_jar='/home/stanford-parser-full-2013-11-12/stanford-parser-3.3.0-models.jar',
                                 model_path="/home/stanford-parser-full-2013-11-12/chinesePCFG.ser.gz",encoding='utf8')

text = '我 对 这个 游戏 有 一 点 上瘾。'
sentences = parser.raw_parse_sents(unicode(text, encoding='utf8'))

However, when I try to

print sentences

I get

[Tree('ROOT', [Tree('IP', [Tree('NP', [Tree('PN', ['\u6211'])])])]), Tree('ROOT', [Tree('IP', [Tree('VP', [Tree('VA', ['\u5bf9'])])])]), Tree('ROOT', [Tree('IP', [Tree('NP', [Tree('PN', ['\u8fd9'])])])]), Tree('ROOT', [Tree('IP', [Tree('VP', [Tree('QP', [Tree('CLP', [Tree('M', ['\u4e2a'])])])])])]), Tree('ROOT', [Tree('IP', [Tree('VP', [Tree('VV', ['\u6e38'])])])]), Tree('ROOT', [Tree('FRAG', [Tree('NP', [Tree('NN', ['\u620f'])])])]), Tree('ROOT', [Tree('IP', [Tree('VP', [Tree('VE', ['\u6709'])])])]), Tree('ROOT', [Tree('FRAG', [Tree('QP', [Tree('CD', ['\u4e00'])])])]), Tree('ROOT', [Tree('IP', [Tree('VP', [Tree('VV', ['\u70b9'])])])]), Tree('ROOT', [Tree('IP', [Tree('VP', [Tree('VV', ['\u4e0a'])])])]), Tree('ROOT', [Tree('FRAG', [Tree('NP', [Tree('NN', ['\u763e'])])])]), Tree('ROOT', [Tree('IP', [Tree('NP', [Tree('PU', ['\u3002'])])])])]

in which the Chinese text has been split into individual characters, each parsed on its own. There should be 9 subtrees, but in fact 12 subtrees are returned. Could anyone show me what the problem is?
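As a sanity check on what raw_parse_sents is actually iterating over, the small debugging sketch below (reusing the text variable from above) lists the input character by character; in Python 2, iterating over a unicode string yields one character at a time:

# Debugging sketch: see what raw_parse_sents receives when given a plain string.
chars = list(unicode(text, encoding='utf8'))
print len(chars)                          # 19: 12 characters plus 7 spaces
print [c for c in chars if c != u' ']     # the 12 non-space characters

The 12 non-space characters match the 12 trees returned above.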

Continuing, I try to collect all the context-free grammar production rules from it:

lst = []
for subtree in sentences:
    for production in subtree.productions():
        lst.append(production)
print lst

[ROOT -> IP, IP -> NP, NP -> PN, PN -> '\u6211', ROOT -> IP, IP -> VP, VP -> VA, VA -> '\u5bf9', ROOT -> IP, IP -> NP, NP -> PN, PN -> '\u8fd9', ROOT -> IP, IP -> VP, VP -> QP, QP -> CLP, CLP -> M, M -> '\u4e2a', ROOT -> IP, IP -> VP, VP -> VV, VV -> '\u6e38', ROOT -> FRAG, FRAG -> NP, NP -> NN, NN -> '\u620f', ROOT -> IP, IP -> VP, VP -> VE, VE -> '\u6709', ROOT -> FRAG, FRAG -> QP, QP -> CD, CD -> '\u4e00', ROOT -> IP, IP -> VP, VP -> VV, VV -> '\u70b9', ROOT -> IP, IP -> VP, VP -> VV, VV -> '\u4e0a', ROOT -> FRAG, FRAG -> NP, NP -> NN, NN -> '\u763e', ROOT -> IP, IP -> NP, NP -> PU, PU -> '\u3002'] 

But the Chinese text is still split into individual characters.

Since I do not know much Java, I have to use the Python interface for this task. I really need help from the Stack Overflow community. Could anyone help me with it?

I have found the solution: using parser.raw_parse instead of parser.raw_parse_sents solves the problem, because parser.raw_parse_sents expects a list of sentences. When it is given a single string, it iterates over the string character by character, which is why each character came back as its own tree.
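For completeness, a minimal sketch of the corrected call, reusing the parser configured above. The isinstance normalization is a defensive assumption: depending on the NLTK version, raw_parse returns either a single Tree or an iterable of Trees.

from nltk.tree import Tree

text = '我 对 这个 游戏 有 一 点 上瘾。'
result = parser.raw_parse(unicode(text, encoding='utf8'))

# Normalize to a list of trees (the return type varies across NLTK versions).
trees = [result] if isinstance(result, Tree) else list(result)

lst = []
for tree in trees:
    for production in tree.productions():
        lst.append(production)
print lst  # productions now span whole words instead of single characters

Equivalently, raw_parse_sents still works if the single sentence is wrapped in a list: parser.raw_parse_sents([unicode(text, encoding='utf8')]).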
