Why does scrapy.Request fall back to the spider's parse() method by default? I do not quite understand the process.
Here is the relevant part of the scrapy.Request source code:
class Request(object_ref):

    def __init__(self, url, callback=None, method='GET', headers=None, body=None,
                 cookies=None, meta=None, encoding='utf-8', priority=0,
                 dont_filter=False, errback=None, flags=None):
        self._encoding = encoding  # this one has to be set first
        self.method = str(method).upper()
        self._set_url(url)
        self._set_body(body)
        assert isinstance(priority, int), "Request priority not an integer: %r" % priority
        self.priority = priority
        assert callback or not errback, "Cannot use errback without a callback"
        self.callback = callback
        self.errback = errback
        ....
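To illustrate what that constructor actually does, here is a minimal standalone sketch (not Scrapy itself; `make_request` is a hypothetical stand-in) of the two checks in `Request.__init__` and of the fact that `callback` is simply stored as `None` when none is given:

```python
# Hypothetical stand-in for Request.__init__ -- not real Scrapy code.
def make_request(url, callback=None, errback=None, priority=0):
    # Same two assertions as in the Scrapy source above.
    assert isinstance(priority, int), "Request priority not an integer: %r" % priority
    assert callback or not errback, "Cannot use errback without a callback"
    # callback is stored as-is; None stays None at this point.
    return {"url": url, "callback": callback, "errback": errback}

req = make_request("http://example.com")
print(req["callback"])  # None -- nothing here ever substitutes parse()
```

So the Request object itself never chooses `parse()`; it just carries `callback=None` along.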
But the default callback here is None, so I am puzzled: in my spider I yield requests without passing any callback, yet parse() still gets called:

if "msg" in text_json and text_json["msg"] == "login":
    for url in self.start_urls:
        yield scrapy.Request(url, dont_filter=True, headers=self.headers)
This is decided inside the Scrapy core, not inside Request itself; see the `request.callback or spider.parse` part:
def call_spider(self, result, request, spider):
    result.request = request
    dfd = defer_result(result)
    dfd.addCallbacks(request.callback or spider.parse, request.errback)
    return dfd.addCallback(iterate_spider_output)
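The key is the expression `request.callback or spider.parse`: since a missing callback is stored as None (which is falsy), the `or` picks the spider's parse() instead. A minimal sketch of that fallback, using stand-in classes rather than real Scrapy ones:

```python
# Stand-in classes -- only the `or` fallback logic mirrors Scrapy.
class Request:
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback  # stays None if not provided

class Spider:
    def parse(self, response):
        return "parse handled %s" % response

    def custom(self, response):
        return "custom handled %s" % response

spider = Spider()
req_default = Request("http://example.com")                 # no callback
req_custom = Request("http://example.com", spider.custom)   # explicit callback

# The same expression Scrapy's engine evaluates in call_spider:
handler_default = req_default.callback or spider.parse
handler_custom = req_custom.callback or spider.parse

print(handler_default("resp"))  # parse handled resp
print(handler_custom("resp"))   # custom handled resp
```

In other words, parse() is not a default argument of Request; it is the engine's fallback at dispatch time whenever `request.callback` is None.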