
Lucene indexing - lots of docs/phrases

What approach should I use to index the following set of files?

Each file contains around 500k lines of characters (400MB). The characters are not words; they are, let's say for the sake of the question, random characters without spaces.

I need to be able to find each line that contains a given 12-character string, for example:

line: AXXXXXXXXXXXXJJJJKJIDJUD... etc., up to 200 chars

interesting part: XXXXXXXXXXXX

While searching, I'm only interested in characters 1-13 (so XXXXXXXXXXXX). After the search I would like to be able to read the line containing XXXXXXXXXXXX without looping through the file.

I wrote the following POC (simplified for the question):

Indexing:

while ( (line = br.readLine()) != null ) {
    doc = new Document();
    Field fileNameField = new StringField(FILE_NAME, file.getName(), Field.Store.YES);
    doc.add(fileNameField);
    Field characterOffset = new IntField(CHARACTER_OFFSET, charsRead, Field.Store.YES);
    doc.add(characterOffset);
    String id = "";
    try {
        id = line.substring(1, 13);
        doc.add(new TextField(CONTENTS, id, Field.Store.YES));
        writer.addDocument(doc);
    } catch ( IndexOutOfBoundsException ior ) {
        // cut off for the sake of the question
    } finally {
        // simplified for the sake of the question; characterOffset is the number
        // of chars to skip while reading the file (ultimately bytes read)
        charsRead += line.length() + 2;
    }
}
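
For context, the snippet assumes an IndexWriter has already been opened elsewhere. A minimal sketch of that setup, assuming Lucene 4.x (where IntField still exists) and hypothetical names for the index directory and the field constants used above:

import java.io.File;

import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// Hypothetical constants matching the field names used in the snippet above.
static final String FILE_NAME = "fileName";
static final String CHARACTER_OFFSET = "characterOffset";
static final String CONTENTS = "contents";

// KeywordAnalyzer indexes each field value as a single unmodified token, so
// the 12-char ids stay intact and case-sensitive; StandardAnalyzer would
// lowercase them, which could silently break matches on uppercase ids.
Directory dir = FSDirectory.open(new File("index"));
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_47, new KeywordAnalyzer());
IndexWriter writer = new IndexWriter(dir, config);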

Searching:

RegexpQuery q = new RegexpQuery(new Term(CONTENTS, id), RegExp.NONE); // because id can be a regexp over the 12-char string

TopDocs results = searcher.search(q, Integer.MAX_VALUE);
ScoreDoc[] hits = results.scoreDocs;
int numTotalHits = results.totalHits;
Map<String, Set<Integer>> fileToOffsets = new HashMap<String, Set<Integer>>();

// collect, per file, the character offsets of every matching line
for ( int i = 0; i < numTotalHits; i++ ) {
    Document doc = searcher.doc(hits[i].doc);
    String fileName = doc.get(FILE_NAME);
    if ( fileName != null ) {
        String foundIds = doc.get(CONTENTS); // stored ids; not used further here
        Set<Integer> offsets = fileToOffsets.get(fileName);
        if ( offsets == null ) {
            offsets = new HashSet<Integer>();
            fileToOffsets.put(fileName, offsets);
        }
        String offset = doc.get(CHARACTER_OFFSET);
        offsets.add(Integer.parseInt(offset));
    }
}
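
Once the offsets are collected, the stored CHARACTER_OFFSET is what makes it possible to jump straight to a matching line instead of scanning the whole file, which is the requirement stated above. A minimal sketch, assuming the stored value really is a byte position from the start of the file (readLineAt is a hypothetical helper, not part of the original code):

import java.io.IOException;
import java.io.RandomAccessFile;

// Read the line that starts at a stored offset without scanning the file.
// Assumes the offset counts bytes from the start of the file.
static String readLineAt(String fileName, long offset) throws IOException {
    try ( RandomAccessFile raf = new RandomAccessFile(fileName, "r") ) {
        raf.seek(offset);      // jump directly to the start of the line
        return raf.readLine(); // read up to the next line terminator
    }
}

One caveat: charsRead in the indexing loop advances by line.length() + 2, which matches byte positions only for a single-byte encoding with CRLF line endings; for LF-only files the increment would need to be line.length() + 1.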

The problem with this approach is that it will create one doc per line.

Can you please give me hints on how to approach this problem with Lucene, and whether Lucene is the way to go here?

Instead of adding a new document for each iteration, use the same document and keep adding fields with the same name to it, something like:

Document doc = new Document();
Field fileNameField = new StringField(FILE_NAME, file.getName(), Field.Store.YES);
doc.add(fileNameField);
String id;
while ( (line = br.readLine()) != null ) {
    id = "";
    try {
        id = line.substring(1, 13);
        doc.add(new TextField(CONTENTS, id, Field.Store.YES));
        // What is this (characterOffset) field for?
        Field characterOffset = new IntField(CHARACTER_OFFSET, bytesRead, Field.Store.YES);
        doc.add(characterOffset);
    } catch ( IndexOutOfBoundsException ior ) {
        //cut off
    } finally {
        if ( "".equals(line) ) {
            bytesRead += 1;
        } else {
            bytesRead += line.length() + 2;
        }
    }
}
writer.addDocument(doc);

This will add the id from each line as a new term in the same field. The same query should continue to work.
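
As a side note, RegexpQuery is only needed when id really is a pattern; if it is an exact 12-character string, a plain TermQuery over the same field does the job. A small sketch, reusing the XXXXXXXXXXXX example from the question:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RegexpQuery;
import org.apache.lucene.search.TermQuery;

// Exact id: match the single indexed token directly, no regexp machinery.
Query exact = new TermQuery(new Term(CONTENTS, "XXXXXXXXXXXX"));

// Pattern over the 12 chars: RegexpQuery, as in the original search code.
Query pattern = new RegexpQuery(new Term(CONTENTS, "XXXX....XXXX"));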

I'm not really sure what to make of your use of the CharacterOffset field, though. Each value will, as with the ids, be appended to the end of the field as another term. It won't be directly associated with a particular term, aside from being, one would assume, the same number of tokens into the field. If you need to retrieve a particular line, rather than the contents of the whole file, your current approach of indexing line by line might be the most reasonable.
