
Scrapy crawling just 1 level of a website

I am using scrapy to crawl all the web pages under a domain.

I have seen this question, but it has no solution. My problem seems to be a similar one. The output of my crawl command looks like this:

scrapy crawl sjsu
2012-02-22 19:41:35-0800 [scrapy] INFO: Scrapy 0.14.1 started (bot: sjsucrawler)
2012-02-22 19:41:35-0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-02-22 19:41:35-0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-02-22 19:41:35-0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-02-22 19:41:35-0800 [scrapy] DEBUG: Enabled item pipelines: 
2012-02-22 19:41:35-0800 [sjsu] INFO: Spider opened
2012-02-22 19:41:35-0800 [sjsu] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-02-22 19:41:35-0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-02-22 19:41:35-0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-02-22 19:41:35-0800 [sjsu] DEBUG: Crawled (200) <GET http://cs.sjsu.edu/> (referer: None)
2012-02-22 19:41:35-0800 [sjsu] INFO: Closing spider (finished)
2012-02-22 19:41:35-0800 [sjsu] INFO: Dumping spider stats:
    {'downloader/request_bytes': 198,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 11000,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 2, 23, 3, 41, 35, 788155),
     'scheduler/memory_enqueued': 1,
     'start_time': datetime.datetime(2012, 2, 23, 3, 41, 35, 379951)}
2012-02-22 19:41:35-0800 [sjsu] INFO: Spider closed (finished)
2012-02-22 19:41:35-0800 [scrapy] INFO: Dumping global stats:
    {'memusage/max': 29663232, 'memusage/startup': 29663232}

The problem here is that the crawl finds links on the first page but does not visit them. What's the use of such a crawler?

EDIT :

My crawler code is:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class SjsuSpider(BaseSpider):
    name = "sjsu"
    allowed_domains = ["sjsu.edu"]
    start_urls = [
        "http://cs.sjsu.edu/"
    ]

    def parse(self, response):
        filename = "sjsupages"
        open(filename, 'wb').write(response.body)

All of my other settings are default.

I think the best way to do this is by using a CrawlSpider. You have to modify your code to the one below to be able to find all links on the first page and visit them:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class SjsuSpider(CrawlSpider):

    name = 'sjsu'
    allowed_domains = ['sjsu.edu']
    start_urls = ['http://cs.sjsu.edu/']
    # allow=() is used to match all links
    rules = [Rule(SgmlLinkExtractor(allow=()), callback='parse_item')]

    def parse_item(self, response):
        x = HtmlXPathSelector(response)

        filename = "sjsupages"
        # open the file in append-binary mode so every crawled page is added to it
        open(filename, 'ab').write(response.body)

If you want to crawl all the links on the website (and not only those on the first level), add follow=True to the rule so that links found on every crawled page are followed as well (follow defaults to False when a callback is set):

rules = [
    Rule(SgmlLinkExtractor(allow=()), callback='parse_item', follow=True)
]

I have changed your 'parse' callback to 'parse_item' because of this:

When writing crawl spider rules, avoid using parse as callback, since the CrawlSpider uses the parse method itself to implement its logic. So if you override the parse method, the crawl spider will no longer work.

For more information you can see this: http://doc.scrapy.org/en/0.14/topics/spiders.html#crawlspider

If you are using BaseSpider, then inside the parse method/callback you need to extract the URLs you want and yield Request objects if you intend to visit those pages:

hxs = HtmlXPathSelector(response)
for url in hxs.select('//a/@href').extract():
    yield Request(url, callback=self.parse)

The parse callback just hands you the response; you have to tell it what you want to do with it. This is stated in the docs.
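For reference, here is a minimal sketch of how the spider from the question could be rewritten along these lines. It keeps the sjsu names from the question and uses Scrapy 0.14-era (Python 2) imports; the urljoin call is an addition of mine so that relative hrefs resolve against the current page URL:

from urlparse import urljoin

from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider

class SjsuSpider(BaseSpider):
    name = "sjsu"
    allowed_domains = ["sjsu.edu"]
    start_urls = ["http://cs.sjsu.edu/"]

    def parse(self, response):
        # Save every crawled page into one file (append mode)
        with open("sjsupages", "ab") as f:
            f.write(response.body)

        # Extract every link on the page and schedule it for crawling;
        # OffsiteMiddleware drops requests outside allowed_domains.
        hxs = HtmlXPathSelector(response)
        for href in hxs.select('//a/@href').extract():
            yield Request(urljoin(response.url, href), callback=self.parse)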

Or, if you wish to use CrawlSpider, you simply define rules for your spider instead.
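If you go that route and only want to follow part of the site, the allow pattern of the link extractor can restrict which URLs are followed. A small sketch, where the /faculty/ path is purely hypothetical:

from scrapy.contrib.spiders import Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

# Inside your CrawlSpider subclass: only follow and parse links whose URL
# matches the (hypothetical) /faculty/ pattern; everything else is ignored.
rules = [
    Rule(SgmlLinkExtractor(allow=(r'/faculty/',)), callback='parse_item', follow=True),
]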

Just in case this is useful: when the crawler does not work, as in this case, make sure you delete the following code from your spider file, because the spider will call this method by default if it is declared in the file.

def parse(self, response):
    pass
