
Resolve coreference using Stanford CoreNLP - unable to load parser model

I want to do something very simple: given a string containing pronouns, I want to resolve them.

For example, I want to turn "Mary has a little lamb. She is very cute." into "Mary has a little lamb. Mary is very cute."

I have tried using Stanford CoreNLP. However, I can't seem to start the parser. I imported all the included jars into my project using Eclipse, and I have allocated 3GB to the JVM (-Xmx3g).

The error is really puzzling:

Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;

I don't understand where that L comes from; I think it is the root of my problem... It is quite strange. I tried looking inside the source files, but there is no wrong reference there.

Code:

import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefGraphAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.IntTuple;
import edu.stanford.nlp.util.Pair;
import edu.stanford.nlp.util.Timing;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import java.util.Properties;

public class Coref {

/**
 * @param args the command line arguments
 */
public static void main(String[] args) throws IOException, ClassNotFoundException {
    // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution 
    Properties props = new Properties();
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // read some text in the text variable
    String text = "Mary has a little lamb. She is very cute."; // Add your text here!

    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);

    // run all Annotators on this text
    pipeline.annotate(document);

    // these are all the sentences in this document
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);

    for(CoreMap sentence: sentences) {
      // traversing the words in the current sentence
      // a CoreLabel is a CoreMap with additional token-specific methods
      for (CoreLabel token: sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);       
      }

      // this is the parse tree of the current sentence
      Tree tree = sentence.get(TreeAnnotation.class);
      System.out.println(tree);

      // this is the Stanford dependency graph of the current sentence
      SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    }

    // This is the coreference link graph
    // Each chain stores a set of mentions that link to each other,
    // along with a method for getting the most representative mention
    // Both sentence and token offsets start at 1!
    Map<Integer, CorefChain> graph = 
      document.get(CorefChainAnnotation.class);
    System.out.println(graph);
  }
}

Full stack trace:

Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Loading POS Model [edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger] ... Loading default properties from trained tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.1 sec].
done [2.2 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [4.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.distsim.crf.ser.gz ... done [3.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.distsim.crf.ser.gz ... done [3.3 sec].
Adding annotator parse
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
    at edu.stanford.nlp.pipeline.ParserAnnotator.loadModel(ParserAnnotator.java:115)
    at edu.stanford.nlp.pipeline.ParserAnnotator.<init>(ParserAnnotator.java:64)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:603)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:585)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:62)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:329)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:196)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:186)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:178)
    at Coref.main(Coref.java:41)

Yes, the L is just an odd Sun convention going back to Java 1.0: in the JVM's internal method descriptors, an object type is written as L followed by the class name and a semicolon.
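You can see the same descriptor notation from plain Java (a small illustration, not part of the original answer): the runtime name of an array class uses it directly.

```java
public class DescriptorDemo {
    public static void main(String[] args) {
        // Array classes expose the JVM's internal descriptor notation:
        // 'L' + class name + ';' for object types, '[' for array dimensions,
        // single letters like 'I' for primitives.
        System.out.println(new String[0].getClass().getName()); // [Ljava.lang.String;
        System.out.println(new int[0].getClass().getName());    // [I
    }
}
```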

LexicalizedParser.loadModel(String, String...) is a new method added to the parser, and it is not being found. I suspect that means there is another, older version of the parser on your classpath that is being picked up instead.
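One way to confirm such a classpath clash (a generic sketch, not from the original answer) is to ask the JVM where it actually loaded a class from. WhichJar is a hypothetical helper; once the CoreNLP jars are on your classpath you would pass edu.stanford.nlp.parser.lexparser.LexicalizedParser as the argument.

```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Default to this class itself if no name is given on the command line.
        Class<?> c = Class.forName(args.length > 0 ? args[0] : "WhichJar");
        // The CodeSource reveals the jar or directory the class came from;
        // it is null for classes loaded by the bootstrap class loader.
        CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(c.getName() + " loaded from: "
                + (src != null ? src.getLocation() : "bootstrap class loader"));
    }
}
```

If the printed location is an old stanford-parser jar rather than the stanford-corenlp distribution, that jar is shadowing the newer class.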

Try this: in a shell outside of any IDE, run these commands (giving the path to stanford-corenlp as appropriate, and changing : to ; if on Windows):

javac -cp ".:stanford-corenlp-2012-04-09/*" Coref.java
java -mx3g -cp ".:stanford-corenlp-2012-04-09/*" Coref

The parser loads and your code runs correctly - you just need to add some print statements so you can see what it does :-).
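For the original goal (rewriting "She" to "Mary"), the substitution itself is plain string surgery once the pipeline has told you the mention spans. Here is a minimal sketch with hard-coded character offsets; in real code you would derive the representative mention and the pronoun's span from the CorefChain objects in the graph, so the values below are assumptions for this one sentence.

```java
public class PronounReplace {
    public static void main(String[] args) {
        String text = "Mary has a little lamb. She is very cute.";
        // Hypothetical values, as coreference output would provide them:
        // the representative mention "Mary", and the pronoun "She"
        // occupying characters 24..27 of the text.
        String representative = "Mary";
        int pronounStart = 24, pronounEnd = 27;

        // Splice the representative mention in place of the pronoun.
        StringBuilder sb = new StringBuilder(text);
        sb.replace(pronounStart, pronounEnd, representative);
        System.out.println(sb); // Mary has a little lamb. Mary is very cute.
    }
}
```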
