Python *self.args, **self.kwargs are causing error inside class

I have been trying to build a simple tool to make standalone Scrapy requests easier to reason about, but I have run into an error that I cannot figure out how to fix.

I know there are other ways to make requests, but I wanted to try this approach. I also know a workaround for this problem, but it would take a lot of manual work, and I would rather find out what is actually causing the error.

Code:

import scrapy
from scrapy.crawler import CrawlerProcess


class Request(scrapy.Request):
    def __init__(self, *args, **kwargs):
        process = CrawlerProcess({})
        process.crawl(self._Request, args=args, kwargs=kwargs)
        process.start()
        self.response: scrapy.Request = self._Request.response

    class _Request(scrapy.Spider):
        response = scrapy.Request
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs

        def start_requests(self):
            # putting the '*' in the args/kwargs of the print statement causes the same error
            print(self.args, self.kwargs)
            self.response = scrapy.Request(*self.args, **self.kwargs)
            yield self.response


if __name__ == '__main__':
    Request('https://scrapy.org/')

Output:

Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] on win32
2019-12-26 16:28:24 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: scrapybot)
2019-12-26 16:28:24 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i  14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.18362-SP0
2019-12-26 16:28:24 [scrapy.crawler] INFO: Overridden settings: {}
2019-12-26 16:28:24 [scrapy.extensions.telnet] INFO: Telnet Password: aaba020161f1f11b
2019-12-26 16:28:24 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-12-26 16:28:24 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-12-26 16:28:24 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-12-26 16:28:24 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-12-26 16:28:24 [scrapy.core.engine] INFO: Spider opened
2019-12-26 16:28:24 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-12-26 16:28:24 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
() {'args': ('https://scrapy.org/',), 'kwargs': {}}
2019-12-26 16:28:24 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "C:\Users\<username>\AppData\Local\Programs\Python\Python37\lib\site-packages\scrapy\core\engine.py", line 127, in _next_request
    request = next(slot.start_requests)
  File "F:/Python/Projects/UserAgentListUpdater/UserAgentListUpdater/spiders/UserAgentListUpdater.py", line 66, in start_requests
    self.response = scrapy.Request(*self.args, **self.kwargs)
TypeError: __init__() got an unexpected keyword argument 'args'
2019-12-26 16:28:24 [scrapy.core.engine] INFO: Closing spider (finished)
2019-12-26 16:28:24 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.009006,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 12, 27, 0, 28, 24, 586562),
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'start_time': datetime.datetime(2019, 12, 27, 0, 28, 24, 577556)}
2019-12-26 16:28:24 [scrapy.core.engine] INFO: Spider closed (finished)

You need to call crawl the same way, unpacking the arguments instead of passing them as keyword arguments named args and kwargs:

process.crawl(self._Request, *args, **kwargs)
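
The difference matters because crawl() forwards any extra positional and keyword arguments straight to the spider's __init__. Passing args=args, kwargs=kwargs therefore wraps everything one level deeper, which is exactly what the log line () {'args': ('https://scrapy.org/',), 'kwargs': {}} shows. Here is a minimal standalone sketch of the two call shapes (plain Python, no Scrapy involved; capture is a hypothetical stand-in for _Request.__init__):

def capture(*args, **kwargs):
    # Hypothetical stand-in for _Request.__init__: just report what it receives.
    return args, kwargs

url_args = ('https://scrapy.org/',)
url_kwargs = {}

# Original call: the tuple and dict arrive as keyword arguments named
# 'args' and 'kwargs', so scrapy.Request(*self.args, **self.kwargs) later
# becomes scrapy.Request(args=(...), kwargs={}) and raises the TypeError.
print(capture(args=url_args, kwargs=url_kwargs))
# -> ((), {'args': ('https://scrapy.org/',), 'kwargs': {}})

# Fixed call: unpacking preserves the original call shape.
print(capture(*url_args, **url_kwargs))
# -> (('https://scrapy.org/',), {})

With the unpacked call, _Request.__init__ receives args == ('https://scrapy.org/',) and kwargs == {}, so start_requests builds scrapy.Request('https://scrapy.org/') as intended.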

