
Lucene indexing - lots of docs/phrases

What approach should I use to index the following set of files?

Each file contains around 500k lines of characters (400 MB). The lines are not words; let's say, for the sake of the question, they are random characters without spaces.

I need to be able to find each line which contains a given 12-character string, for example:

line: AXXXXXXXXXXXXJJJJKJIDJUD... etc., up to 200 chars

interesting part: XXXXXXXXXXXX

While searching, I'm only interested in characters 1-13 (so XXXXXXXXXXXX). After the search I would like to be able to read the line containing XXXXXXXXXXXX without looping through the file.
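Reading a specific line without scanning the whole file is exactly what a stored byte offset enables: seek straight to it. A minimal stdlib sketch, assuming single-byte characters as in the question (`readLineAt` and its arguments are illustrative names, not part of the original code):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class LineAtOffset {
    // Jump straight to a stored byte offset and read the line there,
    // without iterating over the preceding lines.
    static String readLineAt(String fileName, long byteOffset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(fileName, "r")) {
            raf.seek(byteOffset);   // O(1) jump to the stored line start
            return raf.readLine();  // reads up to the next line terminator
        }
    }
}
```

Note that `RandomAccessFile.readLine` decodes each byte as a single character, which is fine for the random single-byte characters described above, but not for multi-byte encodings.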

I wrote the following POC (simplified for the question):

Indexing:

 while ( (line = br.readLine()) != null ) {
        doc = new Document();
        Field fileNameField = new StringField(FILE_NAME, file.getName(), Field.Store.YES);
        doc.add(fileNameField);
        Field characterOffset = new IntField(CHARACTER_OFFSET, charsRead, Field.Store.YES);
        doc.add(characterOffset);
        String id = "";
        try {
            id = line.substring(1, 13);
            doc.add(new TextField(CONTENTS, id, Field.Store.YES));
            writer.addDocument(doc);
        } catch ( IndexOutOfBoundsException ior ) {
            //cut off for sake of question
        } finally {
            // simplified for the sake of the question: characterOffset is the
            // number of chars to skip while reading the file (ultimately bytes read)
            charsRead += line.length() + 2;
        }
    }
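One caveat with tracking offsets this way: `line.length() + 2` counts Java chars and hard-codes a two-byte CRLF terminator. If the files could use LF only or a multi-byte encoding, counting encoded bytes is safer. A sketch of that idea (the charset and separator length are assumptions about the input files, and `OffsetTracker` is a hypothetical helper, not from the original code):

```java
import java.nio.charset.StandardCharsets;

public class OffsetTracker {
    private long bytesRead = 0;
    private final int separatorLength; // 1 for "\n", 2 for "\r\n"

    OffsetTracker(int separatorLength) {
        this.separatorLength = separatorLength;
    }

    // Returns the byte offset at which the given line started,
    // then advances past the line and its terminator.
    long consume(String line) {
        long offset = bytesRead;
        // Count encoded bytes, not chars: for multi-byte charsets they differ.
        bytesRead += line.getBytes(StandardCharsets.UTF_8).length + separatorLength;
        return offset;
    }
}
```

Calling `consume(line)` inside the read loop yields the value to store in `CHARACTER_OFFSET` for that line.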

Searching:

RegexpQuery q = new RegexpQuery(new Term(CONTENTS, id), RegExp.NONE); // because id can be a regexp describing the 12-char string

TopDocs results = searcher.search(q, Integer.MAX_VALUE);
ScoreDoc[] hits = results.scoreDocs;
int numTotalHits = results.totalHits;
Map<String, Set<Integer>> fileToOffsets = new HashMap<String, Set<Integer>>();

for ( int i = 0; i < numTotalHits; i++ ) {
    Document doc = searcher.doc(hits[i].doc);
    String fileName = doc.get(FILE_NAME);
    if ( fileName != null ) {
        String foundIds = doc.get(CONTENTS);
        Set<Integer> offsets = fileToOffsets.get(fileName);
        if ( offsets == null ) {
            offsets = new HashSet<Integer>();
            fileToOffsets.put(fileName, offsets);
        }
        String offset = doc.get(CHARACTER_OFFSET);
        offsets.add(Integer.parseInt(offset));
    }
}

The problem with this approach is that it creates one document per line.

Can you please give me hints on how to approach this problem with Lucene, and whether Lucene is the way to go here?

Instead of adding a new document on each iteration, use the same document and keep adding fields with the same name to it, something like:

Document doc = new Document();
Field fileNameField = new StringField(FILE_NAME, file.getName(), Field.Store.YES);
doc.add(fileNameField);
String id;
while ( (line = br.readLine()) != null ) {
    id = "";
    try {
        id = line.substring(1, 13);
        doc.add(new TextField(CONTENTS, id, Field.Store.YES));
        //What is this (characteroffset) field for?
        Field characterOffset = new IntField(CHARACTER_OFFSET, bytesRead, Field.Store.YES);
        doc.add(characterOffset);
    } catch ( IndexOutOfBoundsException ior ) {
        //cut off
    } finally {
        if ( "".equals(line) ) {
            bytesRead += 1;
        } else {
            bytesRead += line.length() + 2;
        }
    }
}
writer.addDocument(doc);

This will add the id from each line as a new term in the same field. The same query should continue to work.

I'm not really sure what to make of your use of the CharacterOffset field, though. Each value will, as with the ids, be appended to the end of the field as another term. It won't be directly associated with a particular term, aside from being, one would assume, the same number of tokens into the field. If you need to retrieve a particular line, rather than the contents of the whole file, your current approach of indexing line by line might be the most reasonable.
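If the single-document route is taken anyway, the stored values of a multi-valued field come back in insertion order via `Document.getValues(String)`, so the Nth CONTENTS value can be paired positionally with the Nth CHARACTER_OFFSET value, provided both were always added in lockstep. A plain-Java sketch of that positional pairing (the arrays stand in for what `getValues` would return; `pair` is an illustrative helper, not a Lucene API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PositionalPairing {
    // Pair the Nth id with the Nth offset. Valid only if both fields were
    // added in lockstep while indexing (exactly one offset per id).
    static Map<String, Integer> pair(String[] ids, String[] offsets) {
        if (ids.length != offsets.length) {
            throw new IllegalArgumentException("fields were not added in lockstep");
        }
        Map<String, Integer> idToOffset = new LinkedHashMap<>();
        for (int i = 0; i < ids.length; i++) {
            idToOffset.put(ids[i], Integer.parseInt(offsets[i]));
        }
        return idToOffset;
    }
}
```

Note that a map keyed by id silently collapses duplicate ids; if the same 12-char string can occur on several lines, a multimap (id to set of offsets) would be needed instead, which is another argument for the one-document-per-line design.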
