
Nutch does not crawl URLs with query string parameters

I am using Nutch 1.9 and trying to crawl with a single command. As you can see from the output, when it reaches the second-level generate step it returns 0 records. Has anyone faced this issue? I have been stuck on this for the past two days and have searched every option I could find. Any clue/help would be greatly appreciated.

#######  INJECT   ######
Injector: starting at 2015-04-08 17:36:20
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: overwrite: false
Injector: update: false
Injector: Total number of urls rejected by filters: 0
Injector: Total number of urls after normalization: 1
Injector: Total new urls injected: 1
Injector: finished at 2015-04-08 17:36:21, elapsed: 00:00:01
####  GENERATE  ###
Generator: starting at 2015-04-08 17:36:22
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 100000
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: crawl/segments/20150408173625
Generator: finished at 2015-04-08 17:36:26, elapsed: 00:00:03
crawl/segments/20150408173625
#### FETCH  ####
Fetcher: starting at 2015-04-08 17:36:26
Fetcher: segment: crawl/segments/20150408173625
Using queue mode : byHost
Fetcher: threads: 10
Fetcher: time-out divisor: 2
QueueFeeder finished: total 1 records + hit by time limit :0
Using queue mode : byHost
fetching https://ifttt.com/recipes/search?q=SmartThings (queue crawl delay=5000ms)
Using queue mode : byHost
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Fetcher: throughput threshold: -1
Thread FetcherThread has no more work available
Fetcher: throughput threshold retries: 5
-finishing thread FetcherThread, activeThreads=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=0
-activeThreads=0
Fetcher: finished at 2015-04-08 17:36:33, elapsed: 00:00:06
#### PARSE ####
ParseSegment: starting at 2015-04-08 17:36:33
ParseSegment: segment: crawl/segments/20150408173625
ParseSegment: finished at 2015-04-08 17:36:35, elapsed: 00:00:01
########   UPDATEDB   ##########
CrawlDb update: starting at 2015-04-08 17:36:36
CrawlDb update: db: crawl/crawldb
CrawlDb update: segments: [crawl/segments/20150408173625]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: false
CrawlDb update: URL filtering: false
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2015-04-08 17:36:37, elapsed: 00:00:01
#####  GENERATE  ######
Generator: starting at 2015-04-08 17:36:38
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 100000
Generator: jobtracker is 'local', generating exactly one partition.
Generator: 0 records selected for fetching, exiting ...
#######   EXTRACT  #########
crawl/segments/20150408173625
#### Segments #####
20150408173625

EDIT: So I checked another URL with a query parameter (http://queue.acm.org/detail.cfm?id=988409), and it crawled fine...

So that means it is specifically not crawling my original URL: https://ifttt.com/recipes/search?q=SmartThings&ac=true

I have also tried crawling a URL on the same ifttt domain without the query string, and that crawled successfully...

I think the problem is with crawling an https site with a query string. Any help on this issue?

By default, links with query parameters are ignored (filtered out). To allow URLs with parameters to be crawled, open conf/regex-urlfilter.txt and comment out the following line by adding a # at the start of it:

# skip URLs containing certain characters as probable queries, etc.
#-[?*!@=]
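
To see why the default rule drops such URLs, here is a minimal Python sketch. This is not Nutch code, just an illustration of the same regex semantics: in regex-urlfilter.txt a leading '-' means "reject any URL matching the pattern", so any URL containing one of the characters ? * ! @ = is rejected before it is ever fetched.

import re

# Same character class as the default '-[?*!@=]' rule in
# conf/regex-urlfilter.txt; a match means the URL is rejected.
query_chars = re.compile(r"[?*!@=]")

urls = [
    "https://ifttt.com/recipes/search?q=SmartThings",  # contains '?' and '='
    "https://ifttt.com/recipes",                       # no query characters
]

for url in urls:
    verdict = "rejected" if query_chars.search(url) else "allowed"
    print(f"{verdict}: {url}")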

Well, I found the solution. It was my mistake: the ifttt domain blocks, via its robots.txt, the specific area I wanted to crawl.

Before crawling, just check whether the site allows the part you want to crawl by looking at its robots.txt, like this: https://ifttt.com/robots.txt
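
If you want to do this check programmatically rather than by eye, here is a minimal sketch using Python's standard-library robots.txt parser. This is a general-purpose check, not part of Nutch; the "*" user agent is an assumption, so substitute the agent name your Nutch instance is configured with (the http.agent.name property) if you want to test the exact rules that apply to your crawler.

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt.
rp = RobotFileParser()
rp.set_url("https://ifttt.com/robots.txt")
rp.read()

# Ask whether a given user agent may fetch the target URL.
# "*" matches any crawler; prints False if robots.txt disallows the path.
url = "https://ifttt.com/recipes/search?q=SmartThings"
print(rp.can_fetch("*", url))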
