Scrapy start_urls

The script from the tutorial (shown below) contains two start_urls:

from scrapy.spider import Spider
from scrapy.selector import Selector

from dirbot.items import Website

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below is a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html
        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
            items.append(item)

        return items

But why does it only scrape these two pages? I see allowed_domains = ["dmoz.org"], but those two pages also contain links to other pages within the dmoz.org domain! Why doesn't it scrape them as well?

The start_urls class attribute contains the start URLs - nothing more. If you have extracted the URLs of other pages you want to scrape, yield the corresponding requests from the parse callback with [another] callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):

    name = 'my_spider'
    start_urls = [
        'http://www.domain.com/'
    ]
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse main page and extract categories links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method.
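For example, a minimal sketch of overriding start_requests() (the URLs and the callback name here are placeholders, not part of the original answer):

from scrapy.http import Request
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = 'my_spider'

    def start_requests(self):
        # Build the initial requests yourself instead of relying on start_urls;
        # each request can carry its own callback, headers, cookies, etc.
        yield Request('http://www.domain.com/page1', callback=self.parse_page)
        yield Request('http://www.domain.com/page2', callback=self.parse_page)

    def parse_page(self, response):
        # Placeholder callback - extract data or yield further requests here.
        pass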

start_urls contains the links from which the spider starts crawling. If you want to crawl recursively, you should use CrawlSpider and define rules for it. See http://doc.scrapy.org/en/latest/topics/spiders.html for an example.

That class doesn't have a rules attribute. Have a look at http://readthedocs.org/docs/scrapy/en/latest/intro/overview.html and search for "rules" to find an example.
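A minimal sketch of what that could look like for the dmoz spider above (the spider name, the rule and the callback body are only illustrative; note that a CrawlSpider must not override parse, so the callback uses a different name):

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule


class DmozCrawlSpider(CrawlSpider):
    name = "dmoz_crawl"
    allowed_domains = ["dmoz.org"]
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/"]

    # Follow every link inside the allowed domain and pass each response
    # to parse_item; CrawlSpider does the link extraction itself.
    rules = [
        Rule(SgmlLinkExtractor(), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        self.log('Visited %s' % response.url)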

If you use BaseSpider, inside the callback you have to extract the URLs you want yourself and return Request objects.

If you use CrawlSpider, link extraction is taken care of by the rules and the SgmlLinkExtractor associated with them.

If you use rules to follow links (which is already implemented in Scrapy), the spider will scrape them as well. I hope that helps...

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector


class Spider(CrawlSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']
    # Rules only take effect on a CrawlSpider; follow every extracted link.
    rules = [Rule(SgmlLinkExtractor(allow=[], deny=[]), follow=True)]

    ...

You haven't written a function to handle the URLs you actually want to scrape. There are two ways to solve it: 1. use rules (CrawlSpider); 2. write a function that handles the new URLs and pass it as the callback of the requests you yield.
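A minimal sketch of the second approach, condensed from the longer example earlier in the thread (the XPath and callback name are placeholders):

import urlparse

from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']

    def parse(self, response):
        # Extract the URLs you want to follow and hand each one to a
        # second callback instead of returning items directly.
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            yield Request(urlparse.urljoin(response.url, url),
                          callback=self.parse_detail)

    def parse_detail(self, response):
        # Scrape the fields you need from the followed page here.
        pass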
