
Python Scrapy nested pages only need items from innermost page

I am practicing Scrapy on a website with nested pages, and I only need to scrape the content of the innermost page. Is there a way to chain several parse functions, starting from the main `parse` function and following links down to the innermost page, so that each callback opens the next page but the item is only populated in the last parse function and then returned?

Here is what I tried:

try:
    import scrapy
    from urlparse import urljoin

except ImportError:
    print "\nERROR IMPORTING THE NECESSARY LIBRARIES\n"



class CanadaSpider(scrapy.Spider):
    name = 'CanadaSpider'
    start_urls = ['http://www.canada411.ca']


    #PAGE 1 OF THE NESTED WEBSITE GETTING LINK AND JOING WITH THE MAIN LINK AND VISITING THE PAGE
    def parse(self, response):
        SET_SELECTOR = '.c411AlphaLinks.c411NoPrint ul li'
        for PHONE in response.css(SET_SELECTOR):
            selector = 'a ::attr(href)'
            try:
                momo = urljoin('http://www.canada411.ca', PHONE.css(selector).extract_first())

                #PASSING A DICTIONARY AS THE ITEM
                pre  = {}
                post = scrapy.Request(momo, callback=self.parse_pre1, meta={'item': pre})
                yield pre
            except:
                pass   

#PAGE 2 OF THE NESTED WEBSITE


    def parse_pre1(self, response):

        #RETURNING THE SAME ITEM 
        item = response.meta["item"]
        SET_SELECTOR = '.clearfix.c411Column.c411Column3 ul li'

        for PHONE in response.css(SET_SELECTOR):
            selector = 'a ::attr(href)'
            momo = urljoin('http://www.canada411.ca', PHONE.css(selector).extract_first())
            pre = scrapy.Request(momo, callback=self.parse_pre1, meta={'page_2': item})
            yield pre

    def parse_info(self, response):

        #HERE I AM SCRAPING THE DATA
        item = response.meta["page_2"]
        name = '.vcard__name'
        address = '.c411Address.vcard__address'
        ph = '.vcard.label'

        item['name'] = response.css(name).extract_first()
        item['address'] = response.css(address).extract_first()
        item['phoneno'] = response.css(ph).extract_first()
        return item 

The item is inherited down the chain. What am I doing wrong?

In `parse` you yield `pre` instead of the `post` Request instance, so the Request is never scheduled. You should also use a `scrapy.Item` class rather than a plain dict:

  def parse(self, response):
        SET_SELECTOR = '.c411AlphaLinks.c411NoPrint ul li'
        for PHONE in response.css(SET_SELECTOR):
            selector = 'a ::attr(href)'
            try:
                momo = urljoin('http://www.canada411.ca', PHONE.css(selector).extract_first())

                #PASSING A DICTIONARY AS THE ITEM
                pre  = {} # This should be an instance of scrapy.Item
                post = scrapy.Request(momo, callback=self.parse_pre1, meta={'item': pre})
                yield post
            except:
                pass   

In `parse_pre1` you set the callback to `parse_pre1` again; I think you meant `parse_info`:

def parse_pre1(self, response):

    #RETURNING THE SAME ITEM 
    item = response.meta["item"]
    SET_SELECTOR = '.clearfix.c411Column.c411Column3 ul li'

    for PHONE in response.css(SET_SELECTOR):
        selector = 'a ::attr(href)'
        momo = urljoin('http://www.canada411.ca', PHONE.css(selector).extract_first())
        pre = scrapy.Request(momo, callback=self.parse_info, meta={'page_2': item})
        yield pre

