
Simple Scrapy crawler not following links & scraping

Basically the problem is with following the links.

I go through pages 1..2..3..4..5..... 90 pages in total.

Each page has 100 or so links.

Each page is in this format:

http://www.consumercomplaints.in/lastcompanieslist/page/1
http://www.consumercomplaints.in/lastcompanieslist/page/2
http://www.consumercomplaints.in/lastcompanieslist/page/3
http://www.consumercomplaints.in/lastcompanieslist/page/4

This is the regex matching rule:

Rule(LinkExtractor(allow='(http:\/\/www\.consumercomplaints\.in\/lastcompanieslist\/page\/\d+)'),follow=True,callback="parse_data")
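
As a quick sanity check (not part of the original spider, just an illustration using Python's re module), the allow pattern above does match each of the paging URLs:

import re

pattern = r'(http:\/\/www\.consumercomplaints\.in\/lastcompanieslist\/page\/\d+)'
for n in (1, 2, 3, 4):
    url = 'http://www.consumercomplaints.in/lastcompanieslist/page/%d' % n
    # re.search finds a match for every paging URL, so the Rule pattern itself is fine
    assert re.search(pattern, url) is not None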

I go to each page and then create a Request object to scrape all the links inside each page.

Scrapy only crawls 179 links in total every run and then gives a finished status.

What am I doing wrong?

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
import urlparse

class consumercomplaints_spider(CrawlSpider):
    name = "test_complaints"
    allowed_domains = ["www.consumercomplaints.in"]
    protocol='http://'

    start_urls = [
        "http://www.consumercomplaints.in/lastcompanieslist/"
    ]

    #These are the rules for matching the domain links using a regular expression; only matched links are crawled
    rules = [
        Rule(LinkExtractor(allow='(http:\/\/www\.consumercomplaints\.in\/lastcompanieslist\/page\/\d+)'),follow=True,callback="parse_data")
    ]


    def parse_data(self, response):
        #Get All the links in the page using xpath selector
        all_page_links = response.xpath('//td[@class="compl-text"]/a/@href').extract()

        #Convert each Relative page link to Absolute page link -> /abc.html -> www.domain.com/abc.html and then send Request object
        for relative_link in all_page_links:
            print "relative link procesed:"+relative_link

            absolute_link = urlparse.urljoin(self.protocol+self.allowed_domains[0],relative_link.strip())
            request = scrapy.Request(absolute_link,
                         callback=self.parse_complaint_page)
            return request


        return {}

    def parse_complaint_page(self,response):
        print "SCRAPED"+response.url
        return {}

You will need to use yield instead of return.

For each new Request object, use yield request instead of return request.

Check out more about yield here, and the difference between yield and return here.
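
Applied to the spider above, the loop in parse_data would then look roughly like this (a minimal sketch that keeps the asker's XPath and urljoin logic and only swaps return for yield):

    def parse_data(self, response):
        #Get all the complaint links on the page using the same xpath selector
        all_page_links = response.xpath('//td[@class="compl-text"]/a/@href').extract()

        #Yield one Request per link; returning inside the loop ends the callback on the
        #first iteration, so only one complaint link per listing page ever gets requested
        for relative_link in all_page_links:
            absolute_link = urlparse.urljoin(self.protocol+self.allowed_domains[0],relative_link.strip())
            yield scrapy.Request(absolute_link, callback=self.parse_complaint_page)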

