
Merge index in Lucene

Basically, I am new to Lucene. I have created an index from 70 email documents: the first index was built from the first 29 documents, and the remaining 41 documents were used to create a second index.

I tried searching the first index with Lucene and it gave me the results I wanted, but whenever I try to merge the two indexes, it never works for me. For index creation:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileFilter;
import java.io.FileReader;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class Indexer {

    public static void main(String[] args) throws Exception {
        // Directories are hardcoded below, so no command-line arguments are expected
        if (args.length != 0) {
            throw new IllegalArgumentException("Usage: java "
                    + Indexer.class.getName() + " (no arguments; directories are hardcoded)");
        }
        String indexDir = "docsOPDir"; // index output directory
        String dataDir = "docsDir";    // directory containing the e-mail files
        long start = System.currentTimeMillis();
        Indexer indexer = new Indexer(indexDir);
        int numIndexed;
        try {
            numIndexed = indexer.index(dataDir, new TextFilesFilter());
        } finally {
            indexer.close();
        }
        long end = System.currentTimeMillis();
        System.out.println("Indexing " + numIndexed + " files took "
                + (end - start) + " milliseconds");
    }

    private IndexWriter writer;

    public Indexer(String indexDir) throws IOException {
        File INDEX_DIR = new File(indexDir);

        INDEX_DIR.mkdir();
        Directory dir = FSDirectory.getDirectory(INDEX_DIR);
        writer = new IndexWriter(dir, new StandardAnalyzer(), true);
        writer.setMergeFactor(1000);
        writer.setRAMBufferSizeMB(50);

    }

    public void close() throws IOException {
        writer.close(); //4
    }

    public int index(String dataDir, FileFilter filter) throws Exception {
        int count = 0;
        File[] files = new File(dataDir).listFiles();
        for (File f : files) {
            System.out.println("Reading File: " + f);
            if (!f.isDirectory() && !f.isHidden() && f.exists() && f.canRead()
                    && (filter == null || filter.accept(f))) {
                indexFile(f);
                count++;
            }
        }
        // numRamDocs() only counts documents still buffered in RAM, which
        // under-counts once the writer flushes, so count indexed files instead
        return count;
    }

    private static class TextFilesFilter implements FileFilter {
        public boolean accept(File path) {
            return !path.getName().toLowerCase() //6
                    .startsWith("541"); //6
        }
    }

    protected Document getDocument(File f) throws Exception {
        Document doc = new Document();
        doc.add(new Field("subject", getSubject(f),Field.Store.YES, Field.Index.TOKENIZED)); //7
        doc.add(new Field("filename", f.getName(), //8
                Field.Store.YES, Field.Index.NO));//8
        doc.add(new Field("fullpath", f.getCanonicalPath(), //9
                Field.Store.YES, Field.Index.NO));//9
        return doc;
    }
    private String getSubject(File f) throws Exception {
        BufferedReader br = new BufferedReader(new FileReader(f));
        try {
            String line;
            while ((line = br.readLine()) != null) {
                if (line.toUpperCase().startsWith("SUBJECT")) {
                    return line;
                }
            }
            return "NO Subject Found";
        } finally {
            br.close(); // always release the file handle
        }
    }
    private void indexFile(File f) throws Exception {
        System.out.println("Indexing " + f.getCanonicalPath());
        Document doc = getDocument(f);
        writer.addDocument(doc); //10
    }
}
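The SUBJECT-line scan inside getSubject() can be checked in isolation before worrying about the merge. A minimal sketch using only java.io (the SubjectExtractor class name and the sample message text are hypothetical, not part of the original code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class SubjectExtractor {

    // Mirrors getSubject(): return the first line starting with
    // "SUBJECT" (case-insensitive), or a fallback string if none exists
    static String extractSubject(BufferedReader br) throws IOException {
        String line;
        while ((line = br.readLine()) != null) {
            if (line.toUpperCase().startsWith("SUBJECT")) {
                return line;
            }
        }
        return "NO Subject Found";
    }

    public static void main(String[] args) throws IOException {
        String email = "From: a@example.com\nSubject: Hello\nBody text";
        String subject = extractSubject(
                new BufferedReader(new StringReader(email)));
        System.out.println(subject); // prints "Subject: Hello"
    }
}
```

Feeding a message with no Subject header returns the "NO Subject Found" fallback, which then gets indexed as the document's subject field.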

For merging the indexes:

        File INDEXES_DIR = new File("\\docsOP2");
        File INDEX_DIR = new File("\\docs");

        INDEX_DIR.mkdir();

        Date start = new Date();

        try {

            IndexWriter writer = new IndexWriter(INDEX_DIR, 
                                                new StandardAnalyzer(), 
                                                true);
            writer.setMergeFactor(1000);
            writer.setRAMBufferSizeMB(50);

            String[] subDirs = INDEXES_DIR.list();
            Directory indexes[] = new Directory[subDirs.length];

            for (int i = 0; i < subDirs.length; i++) {
                System.out.println("Adding: " + subDirs[i]);
                indexes[i] = FSDirectory.getDirectory(INDEXES_DIR.getAbsolutePath()
                        + File.separator + subDirs[i]);
                System.out.println(indexes[i]);
            }

            System.out.print("Merging added indexes...");
            writer.addIndexes(indexes);
            System.out.println("done");

            System.out.print("Optimizing index...");
            writer.optimize();
            writer.close();
            System.out.println("done");

            Date end = new Date();
            System.out.println("It took: " + ((end.getTime() - start.getTime()) / 1000)
                    + "\"");
        } catch (IOException e) {
            e.printStackTrace();
        }
The code looks correct. To help you track down the issue, dump the new index to see what it contains.
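A quick dump using the same pre-3.0 Lucene API as the question might look like the sketch below; the IndexDumper class name is hypothetical, and the "\\docs" path is taken from the merging snippet above. The linked GIST produces a richer XML dump:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermEnum;

public class IndexDumper {
    public static void main(String[] args) throws Exception {
        // "\\docs" is the merged-index directory from the merging code above
        IndexReader reader = IndexReader.open("\\docs");
        try {
            System.out.println("numDocs: " + reader.numDocs());

            // Print the stored "subject" field of every live document
            for (int i = 0; i < reader.maxDoc(); i++) {
                if (!reader.isDeleted(i)) {
                    Document doc = reader.document(i);
                    System.out.println(i + ": " + doc.get("subject"));
                }
            }

            // Walk every indexed term to see what is actually searchable
            TermEnum terms = reader.terms();
            while (terms.next()) {
                System.out.println(terms.term());
            }
            terms.close();
        } finally {
            reader.close();
        }
    }
}
```

If numDocs is 70 and the subjects from both batches appear, the merge itself worked and the problem is in how you search the merged index.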

Here is a GIST with some code: Dump a Lucene index as an XML document
