Nutch 1.11 crawl issue
I have followed the tutorial and configured Nutch to run on Windows 7 using Cygwin, and I'm using Solr 5.4.0 to index the data.
But Nutch 1.11 fails when executing a crawl.
Crawl command:

$ bin/crawl -i -D solr.server.url= http://127.0.0.1:8983/solr /urls /TestCrawl 2
Error/Exception:
Injecting seed URLs
/apache-nutch-1.11/bin/nutch inject /TestCrawl/crawldb /urls
Injector: starting at 2016-01-19 17:11:06
Injector: crawlDb: /TestCrawl/crawldb
Injector: urlDir: /urls
Injector: Converting injected urls to crawl db entries.
Injector: java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
    at org.apache.nutch.crawl.Injector.inject(Injector.java:323)
    at org.apache.nutch.crawl.Injector.run(Injector.java:379)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.nutch.crawl.Injector.main(Injector.java:369)
Error running:
/home/apache-nutch-1.11/bin/nutch inject /TestCrawl/crawldb /urls
Failed with exit value 127.
I can see there are multiple problems with your command; try this:
bin/crawl -i -Dsolr.server.url=http://127.0.0.1:8983/solr/core_name path_to_seed crawl 2
The first problem is that there is a space when you pass the solr parameter. The second problem is that the Solr URL should include the core name as well.
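To illustrate the first problem (this is just a demonstration of shell word splitting, not the real crawl script): with a space after the `=`, the shell passes the URL as a separate argument, so solr.server.url ends up with an empty value.

```shell
# Broken form: the space splits the URL off into its own argument,
# leaving solr.server.url= empty.
set -- -D solr.server.url= http://127.0.0.1:8983/solr
echo "broken form passes $# arguments"

# Fixed form: no space, and the core name (here 'core_name', as in the
# corrected command above) appended to the base Solr URL.
set -- -D solr.server.url=http://127.0.0.1:8983/solr/core_name
echo "fixed form passes $# arguments"
```

The broken form passes three arguments where the fixed form passes two, which is why Nutch never sees a usable Solr URL.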
A hadoop-core jar file is needed when you are working with Nutch; the hadoop-core jar compatible with Nutch 1.11 is 0.20.0.
Please download the jar from this link: http://www.java2s.com/Code/Jar/h/Downloadhadoop0200corejar.htm
Paste that jar into the "C:\cygwin64\home\apache-nutch-1.11\lib" folder and it will run successfully.
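A quick sanity check you could run afterwards (a sketch: NUTCH_LIB below is created as a temporary stand-in for the real C:\cygwin64\home\apache-nutch-1.11\lib folder, and the dummy jar simulates the download; point NUTCH_LIB at your actual lib directory instead):

```shell
# Stand-in for the Nutch lib folder; replace with your real path, e.g.
# NUTCH_LIB="$HOME/apache-nutch-1.11/lib"
NUTCH_LIB="$(mktemp -d)"
touch "$NUTCH_LIB/hadoop-core-0.20.0.jar"   # simulates the downloaded jar

# The actual check: is any hadoop-core jar present?
if ls "$NUTCH_LIB"/hadoop-core-*.jar >/dev/null 2>&1; then
  echo "hadoop-core jar found in $NUTCH_LIB"
else
  echo "hadoop-core jar missing"
fi
```

If the jar is missing from the lib folder, the inject step fails with the same NullPointerException shown in the question.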