
Exception when indexing text documents with Lucene, using SnowballAnalyzer for cleaning up

I am indexing documents with Lucene and am trying to apply the SnowballAnalyzer for punctuation and stopword removal from the text. I keep getting the following error:

IllegalAccessError: tried to access method org.apache.lucene.analysis.Tokenizer.<init>(Ljava/io/Reader;)V from class org.apache.lucene.analysis.snowball.SnowballAnalyzer

Here is the code, I would very much appreciate help! I am new to this.

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Date;

import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.snowball.SnowballAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class Indexer {

    private Indexer() {}

    private String[] stopWords = {....};

    private String indexName;
    private IndexWriter iWriter;
    private static String FILES_TO_INDEX = "/Users/ssi/forindexing";

    public static void main(String[] args) throws Exception {
        Indexer m = new Indexer();
        m.index("./newindex");
    }

    public void index(String indexName) throws Exception {
        this.indexName = indexName;

        final File docDir = new File(FILES_TO_INDEX);

        if (!docDir.exists() || !docDir.canRead()) {
            System.err.println("Something wrong... " + docDir.getPath());
            System.exit(1);
        }

        Date start = new Date();

        PerFieldAnalyzerWrapper analyzers = new PerFieldAnalyzerWrapper(new SimpleAnalyzer());
        analyzers.addAnalyzer("text", new SnowballAnalyzer("English", stopWords));
        Directory directory = FSDirectory.open(new File(this.indexName));
        IndexWriter.MaxFieldLength maxLength = IndexWriter.MaxFieldLength.UNLIMITED;

        iWriter = new IndexWriter(directory, analyzers, true, maxLength);

        System.out.println("Indexing to dir..........." + indexName);

        if (docDir.isDirectory()) {
            File[] files = docDir.listFiles();
            if (files != null) {
                for (int i = 0; i < files.length; i++) {
                    try {
                        indexDocument(files[i]);
                    } catch (FileNotFoundException fnfe) {
                        fnfe.printStackTrace();
                    }
                }
            }
        }

        System.out.println("Optimizing...... ");
        iWriter.optimize();
        iWriter.close();
        Date end = new Date();
        System.out.println("Time to index was " + (end.getTime() - start.getTime()) + " milliseconds");
    }

    private void indexDocument(File someDoc) throws IOException {
        Document doc = new Document();
        Field name = new Field("name", someDoc.getName(), Field.Store.YES, Field.Index.ANALYZED);
        Field text = new Field("text", new FileReader(someDoc), Field.TermVector.WITH_POSITIONS_OFFSETS);
        doc.add(name);
        doc.add(text);

        iWriter.addDocument(doc);
    }
}

This says that one Lucene class is inconsistent with another Lucene class -- one is accessing a member of the other that it can't. This strongly suggests you somehow have two different, incompatible versions of Lucene in your classpath.
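One quick way to confirm this is to print which jar each of the two conflicting classes was actually loaded from; if `Tokenizer` and `SnowballAnalyzer` resolve to different jars, the classpath mixes Lucene versions. A minimal sketch (the class name `ClasspathCheck` and helper `locationOf` are illustrative, not from the question):

    // Sketch: print the jar a class was loaded from, to spot mixed Lucene versions.
    public class ClasspathCheck {

        static String locationOf(Class<?> c) {
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Classes loaded by the bootstrap class loader (e.g. java.lang.String)
            // have no code source, so guard against null.
            return (src == null) ? "<bootstrap class loader>" : src.getLocation().toString();
        }

        public static void main(String[] args) {
            // In the real project you would compare the two conflicting classes, e.g.:
            // System.out.println(locationOf(org.apache.lucene.analysis.Tokenizer.class));
            // System.out.println(locationOf(org.apache.lucene.analysis.snowball.SnowballAnalyzer.class));
            System.out.println(locationOf(ClasspathCheck.class));
        }
    }

If the two printed locations differ, remove one of the Lucene jars (or align your analyzer jar with your lucene-core version) so only a single consistent version remains.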
