
Scrapy nested page crawling

I am struggling with nested page crawling.

I only get as many items as there are items on the first crawled page.

The site structure is like this:

  1. Crawl brands - brand links
  2. Using the brand links, go and crawl models and model links
  3. Using the model links, go and crawl each announcement and its attributes

Let's say Brand A has 2 models: the first model has 11 announcements and the second has 9. Brand B has 3 models and each model has 5 announcements.

In the example above I need to get each announcement as a separate item (11 + 9 + 3 × 5 = 35 in total), but instead I only get as many items as there are brands: Brand A with its first announcement, then Brand B with its first announcement.

from scrapy import log
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider

from myproject.items import SiteItem  # adjust to your project's items module


class SiteSpider(CrawlSpider):
    log.start(logfile="log.txt", loglevel="DEBUG", logstdout=None)
    name = "site"
    #download_delay = 2
    allowed_domains = ['site.com']
    start_urls = ['http://www.site.com/search.php?c=1111']
    items = {}

    def parse(self, response):
        sel = Selector(response)
        brands = sel.xpath("//li[@class='class_11']")
        for brand in brands:
            item = SiteItem()
            url = brand.xpath('a/@href')[0].extract()
            item['marka'] = brand.xpath("a/text()")[0].extract()
            item['marka_link'] = url
            # follow the brand link and pass the partially filled item along
            request = Request("http://www.site.com" + url,
                              callback=self.parse_model,
                              meta={'item': item})
            yield request

    def parse_model(self, response):
        sel = Selector(response)
        models = sel.xpath("//li[@class='class_12']")
        for model in models:
            item = SiteItem(response.meta["item"])
            url2 = model.xpath('a/@href')[0].extract()
            item['model'] = model.xpath("a/text()")[0].extract()
            item['model_link'] = url2

        return item

Could you please help this noobie with some pseudocode to implement this? I guess I am making a mistake at the foundation level.

In your parse_model you have a loop that creates items but does not yield them. Try changing it to:

def parse_model(self, response):
    sel = Selector(response)
    models = sel.xpath("//li[@class='class_12']")
    for model in models:
        item = SiteItem(response.meta["item"])
        url2 = model.xpath('a/@href')[0].extract()
        item['model'] = model.xpath("a/text()")[0].extract()
        item['model_link'] = url2

        # yield inside the loop so every model produces its own item
        yield item
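
This yields one item per model. To reach the announcement level (step 3 in the question) and get all 35 announcements as separate items, parse_model can instead yield a new Request per model, again carrying a copy of the item in meta, and a third callback can extract the announcements. Below is a minimal sketch of that idea; the parse_announcement name, the "//li[@class='class_13']" XPath, and the announcement / announcement_link fields are placeholders, not from the original post, so adjust them to the real page structure and your Item definition.

def parse_model(self, response):
    sel = Selector(response)
    models = sel.xpath("//li[@class='class_12']")
    for model in models:
        item = SiteItem(response.meta["item"])
        url2 = model.xpath('a/@href')[0].extract()
        item['model'] = model.xpath("a/text()")[0].extract()
        item['model_link'] = url2
        # follow the model link and keep passing the partially filled item
        yield Request("http://www.site.com" + url2,
                      callback=self.parse_announcement,
                      meta={'item': item})

def parse_announcement(self, response):
    sel = Selector(response)
    # placeholder XPath - use whatever selects one announcement row
    announcements = sel.xpath("//li[@class='class_13']")
    for ann in announcements:
        # copy the item so every announcement becomes a separate item
        item = SiteItem(response.meta["item"])
        item['announcement'] = ann.xpath("a/text()")[0].extract()
        item['announcement_link'] = ann.xpath('a/@href')[0].extract()
        yield item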
