
scrapy not running callback function

I would really appreciate help with my code. It should print:

URL is: http://en.wikipedia.org/wiki/Python_%28programming_language%29

Title is: Python (programming language)

import scrapy

class ArticleSpider(scrapy.Spider):
    name='article'
    
    def start_requests(self):
        urls = [
            'http://en.wikipedia.org/wiki/Python_%28programming_language%29']
        #,   'https://en.wikipedia.org/wiki/Functional_programming',
        #    'https://en.wikipedia.org/wiki/Monty_Python']
        
        return [scrapy.Request(url=url, callback=self.parse)
            for url in urls]

def parse(self, response):
    url = response.url
    title = response.css('h1::text').extract_first()
    print('URL is: {}'.format(url))
    print('Title is: {}'.format(title))

Here is the output:

(venv) C:\PythonProjects\il_health_facilities\wikiSpider\wikiSpider\spiders>scrapy runspider articles.py
2021-02-08 23:09:03 [scrapy.utils.log] INFO: Scrapy 2.4.1 started (bot: wikiSpider)
2021-02-08 23:09:03 [scrapy.utils.log] INFO: Versions: lxml 4.6.2.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.9.0 (tags/v3.9.0:9cf6752, Oct  5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1i  8 Dec 2020), cryptography 3.4.2, Platform Windows-10-10.0.19041-SP0
2021-02-08 23:09:03 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-02-08 23:09:03 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'wikiSpider',
 'NEWSPIDER_MODULE': 'wikiSpider.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_LOADER_WARN_ONLY': True,
 'SPIDER_MODULES': ['wikiSpider.spiders']}
2021-02-08 23:09:03 [scrapy.extensions.telnet] INFO: Telnet Password: 9f145e989cdeb188
2021-02-08 23:09:03 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2021-02-08 23:09:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-02-08 23:09:03 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-02-08 23:09:03 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-02-08 23:09:03 [scrapy.core.engine] INFO: Spider opened
2021-02-08 23:09:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-02-08 23:09:03 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-02-08 23:09:03 [scrapy.core.engine] INFO: Closing spider (finished)
2021-02-08 23:09:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.007,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 2, 9, 7, 9, 3, 568775),
 'log_count/INFO': 10,
 'start_time': datetime.datetime(2021, 2, 9, 7, 9, 3, 561775)}
2021-02-08 23:09:03 [scrapy.core.engine] INFO: Spider closed (finished)

(venv) C:\PythonProjects\il_health_facilities\wikiSpider\wikiSpider\spiders>

For debugging purposes, I ran scrapy shell "http://en.wikipedia.org/wiki/Python_%28programming_language%29". The following is the output:

2021-02-08 23:13:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/Python_%28programming_language%29> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x000002C898065070>
[s]   item       {}
[s]   request    <GET http://en.wikipedia.org/wiki/Python_%28programming_language%29>
[s]   response   <200 https://en.wikipedia.org/wiki/Python_%28programming_language%29>
[s]   settings   <scrapy.settings.Settings object at 0x000002C898061970>
[s]   spider     <DefaultSpider 'default' at 0x2c8983bce50>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
>>> response.css('h1::text').extract_first()
'Python (programming language)'
>>> response.url
'https://en.wikipedia.org/wiki/Python_%28programming_language%29'
>>>

Thank you

Just indent the parse method correctly and it should work.
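
A minimal sketch of that fix, keeping your start_requests as-is and only moving parse inside the class body:

import scrapy

class ArticleSpider(scrapy.Spider):
    name = 'article'

    def start_requests(self):
        urls = [
            'http://en.wikipedia.org/wiki/Python_%28programming_language%29']
        return [scrapy.Request(url=url, callback=self.parse)
                for url in urls]

    def parse(self, response):  # indented so it is a method of ArticleSpider
        url = response.url
        title = response.css('h1::text').extract_first()
        print('URL is: {}'.format(url))
        print('Title is: {}'.format(title))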

Also, you can get rid of the start_requests method:

import scrapy


class ArticleSpider(scrapy.Spider):
    name = 'article'
    start_urls = [
        'http://en.wikipedia.org/wiki/Python_%28programming_language%29',
        # add more URLs here, e.g.:
        # 'https://en.wikipedia.org/wiki/Functional_programming',
        # 'https://en.wikipedia.org/wiki/Monty_Python',
    ]

    def parse(self, response):
        title = response.css('h1::text').get()
        print(f'URL is: {response.url}')
        print(f'Title is: {title}')
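
If you want the scraped data somewhere other than the console, a common Scrapy pattern is to yield an item instead of printing; a minimal sketch of the same parse method:

    def parse(self, response):
        # Yielding a dict hands the result to Scrapy's feed exports
        yield {
            'url': response.url,
            'title': response.css('h1::text').get(),
        }

You can then export the results to a file with, for example, scrapy runspider articles.py -o titles.json.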

As @joao wrote in the comments, your parse method is not defined as a method but as a function outside of ArticleSpider. I put it inside and it works for me. PS: If you're using the default "parse" name for the method, you don't have to specify it as the callback when building the request.

Output

2021-02-09 10:29:13 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://en.wikipedia.org/wiki/Python_%28programming_language%29> from <GET http://en.wikipedia.org/wiki/Python_%28programming_language%29>
2021-02-09 10:29:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/Python_%28programming_language%29> (referer: None)
URL is: https://en.wikipedia.org/wiki/Python_%28programming_language%29
Title is: Python (programming language)
2021-02-09 10:29:13 [scrapy.core.engine] INFO: Closing spider (finished)
2021-02-09 10:29:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

Code

import scrapy

class ArticleSpider(scrapy.Spider):
    name = 'article'

    def start_requests(self):
        urls = [
            'http://en.wikipedia.org/wiki/Python_%28programming_language%29',
            # 'https://en.wikipedia.org/wiki/Functional_programming',
            # 'https://en.wikipedia.org/wiki/Monty_Python',
        ]
        # No callback needed here: Scrapy calls self.parse by default
        return [scrapy.Request(url=url) for url in urls]

    def parse(self, response):
        url = response.url
        title = response.css('h1::text').extract_first()
        print('URL is: {}'.format(url))
        print('Title is: {}'.format(title))
