
Nutch Crawl Script

Running Nutch 1.10, I'm having trouble understanding the crawl script provided by the Nutch developers:

Usage: crawl [-i|--index] [-D "key=value"] <Seed Dir> <Crawl Dir> <Num Rounds>
    -i|--index      Indexes crawl results into a configured indexer
    -D              A Java property to pass to Nutch calls
    Seed Dir        Directory in which to look for a seeds file
    Crawl Dir       Directory where the crawl/link/segments dirs are saved
    Num Rounds      The number of rounds to run this crawl for
 Example: bin/crawl -i -D solr.server.url=http://localhost:8983/solr/ urls/ TestCrawl/  2

I was wondering if someone could shed some light on how to read this. For example:

    -i|--index      **What is the configured indexer? Is this part of Nutch? Or is it another program like Solr? When I put in -i, what am I doing?**
    -D              **Not sure how these get used in the crawl but the instruction is pretty self-explanatory.**
    Seed Dir        **Self-explanatory, but where do I put the directory within Nutch? I created a urls directory (per the instructions) in the apache-nutch-1.10 directory. I've also tried putting it in the apache-nutch-1.10/bin directory because that is where the crawl starts from.**
    Crawl Dir       **Is this where the results of the crawl go, or is this where the data for the injection into the crawldb goes? If it's the latter, where do I get said data? The directory starts out empty and never gets filled. Confusing!**
    Num Rounds      **Self-explanatory**

Additional questions: where do the results of the crawl go? Do they have to go into a Solr core (or some other software)? Can they go straight to a directory so I can view them? What format do they come in?

Thanks!

-i: the configured indexer is an external program such as Solr/ElasticSearch. So when you specify the -i option, the crawl script runs the indexing job; otherwise it is skipped.
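As a minimal sketch (the urls/, TestCrawl/ and Solr URL values are just the ones from the usage example above, not anything specific to your setup), the only difference is whether the indexing job runs at the end of each round:

    # Crawl only: no indexing job is run
    bin/crawl urls/ TestCrawl/ 2

    # Crawl and index into Solr: -i enables indexing, -D passes the Solr URL to the indexer
    bin/crawl -i -D solr.server.url=http://localhost:8983/solr/ urls/ TestCrawl/ 2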

Crawl Dir: this is the directory where the crawled data is stored. It contains the crawldb, segments, and linkdb, so basically all crawl-related data goes here.
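For illustration, after a round or two the crawl directory typically looks something like this (the timestamped segment name below is hypothetical; Nutch creates one segment per fetch round):

    TestCrawl/
        crawldb/                  # status of every URL Nutch knows about
        linkdb/                   # inlink information per URL
        segments/
            20150601120000/       # one segment per round, named by timestamp
                content/          # raw fetched content
                crawl_fetch/      # fetch status of each URL
                crawl_generate/   # list of URLs scheduled for this round
                crawl_parse/      # outlinks used to update the crawldb
                parse_data/       # parsed metadata and outlinks
                parse_text/       # extracted plain text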

The results of the crawl go into the crawl dir you specify. The data is stored as sequence files, and there are commands for viewing it.

You can find them at https://wiki.apache.org/nutch/CommandLineOptions.
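For example (the segment name below is hypothetical; substitute one from your own segments/ directory), the readdb, readlinkdb, and readseg commands dump the sequence files into plain-text output you can read directly:

    # Summary statistics for the crawldb
    bin/nutch readdb TestCrawl/crawldb -stats

    # Dump the crawldb, linkdb, and a segment into readable text output directories
    bin/nutch readdb TestCrawl/crawldb -dump crawldb_dump
    bin/nutch readlinkdb TestCrawl/linkdb -dump linkdb_dump
    bin/nutch readseg -dump TestCrawl/segments/20150601120000 segment_dump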

