
How do I Index PDF files and search for keywords?

What I have is a bunch of PDFs (a few hundred). They don't have a proper structure, nor do they have particular fields. All they have is a lot of text.

What I am trying to do:

Index the PDFs and search for some keywords against the index. I am interested in finding out whether a particular keyword is in a PDF doc, and if it is, I want the line where the keyword is found. If I searched for 'Google' in a PDF doc that contains that term, I would like to see 'Google is a great search engine', which is the line in the PDF.

What I have decided to do:

Either use SOLR or Whoosh, but SOLR looks better because of its built-in PDF support. I prefer to code in Python, and sunburnt is a wrapper around SOLR which I like. SOLR's sample/example project has a price-comparison-based schema file. Now I am not sure if I can use SOLR to answer my problem.

What do you guys suggest? Any input is much appreciated.

I think Solr fits your needs.

The "Highlighting" feature is what you are looking for.. For that you have to index and to store the documents in lucene index.

The highlighting feature returns a snippet in which the searched text is marked.

Look at this: http://wiki.apache.org/solr/HighlightingParameters
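
As a rough illustration, this is what a highlighted query against the standard select handler could look like from Python, again assuming the placeholder core "pdfs" and stored field "content" from above:

    # Sketch of a highlighted keyword search against Solr's select handler.
    # Assumes the same placeholder core "pdfs" and stored field "content".
    import requests

    SOLR_SELECT_URL = "http://localhost:8983/solr/pdfs/select"

    def search(keyword):
        resp = requests.get(
            SOLR_SELECT_URL,
            params={
                "q": "content:%s" % keyword,
                "wt": "json",
                "hl": "true",          # turn highlighting on
                "hl.fl": "content",    # field to build snippets from
                "hl.snippets": 3,      # up to 3 snippets per document
            },
        )
        resp.raise_for_status()
        data = resp.json()
        # The "highlighting" section maps each document id to its marked-up snippets.
        for doc_id, fields in data.get("highlighting", {}).items():
            for snippet in fields.get("content", []):
                print(doc_id, ":", snippet)

    search("Google")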

Another offline/standalone solution:

I once solved this by converting the PDF files to text with utilities such as pdftotext (pdftohtml would also work, I guess), generating a 'cache' of sorts. Then, using grep, I searched the text-file cache for keywords.

This is slightly different from your proposed solution, but I imagine you could drive it from Python as well.
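
A rough sketch of that cache-and-grep approach in Python, assuming poppler's pdftotext utility is on your PATH; the folder names are just placeholders:

    # Rough sketch of the pdftotext "cache" approach. Assumes poppler's
    # pdftotext is installed and on the PATH; folder names are placeholders.
    import os
    import subprocess

    PDF_DIR = "pdfs"
    CACHE_DIR = "text_cache"

    def build_cache():
        os.makedirs(CACHE_DIR, exist_ok=True)
        for name in os.listdir(PDF_DIR):
            if name.lower().endswith(".pdf"):
                txt_path = os.path.join(CACHE_DIR, name[:-4] + ".txt")
                # pdftotext <input.pdf> <output.txt>
                subprocess.run(["pdftotext", os.path.join(PDF_DIR, name), txt_path],
                               check=True)

    def search(keyword):
        # grep-style scan: print each cached text line that contains the keyword
        for name in os.listdir(CACHE_DIR):
            path = os.path.join(CACHE_DIR, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    if keyword.lower() in line.lower():
                        print("%s: %s" % (name, line.strip()))

    build_cache()
    search("Google")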
