
Please tell me what's wrong with the scrapy splash code

I am trying to scrape the content (#recent_list_box > li) from the Samsung Newsroom Mexico site, but it doesn't work. Could you tell me why?

https://news.samsung.com/mx

I think the content is rendered by JavaScript, which is why I cannot read it.

Versions: Scrapy 2.1.0, Splash 3.4.1

Spider code:

import scrapy
from scrapy_splash import SplashRequest
from scrapy import Request


class CrawlspiderSpider(scrapy.Spider):
    name = 'crawlspider'
    allowed_domains = ['news.samsung.com/mx']
    page = 1
    start_urls = ['https://news.samsung.com/mx']

    def start_request(self):
        for url in self.start_urls:
            yield SplashRequest(
                         url,
                         self.main_parse,
                         endpoint='render.html',
                         args = {'wait': 10}
                     )

    def parse(self, response):
        lists = response.css('#recent_list_box > li').getAll()
        for list in lists:
            yield {"list" :lists.get() }

We have already included the required middleware. Settings code:

BOT_NAME = 'spider'
SPIDER_MODULES = ['spider.spiders']
NEWSPIDER_MODULE = 'spider.spiders'
LOG_FILE = 'log.txt'
AJAXCRAWL_ENABLED = True
ROBOTSTXT_OBEY = False
SPLASH_URL = 'http://127.0.0.1'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPLASH_LOG_400 = True
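One thing worth double-checking alongside the settings above: the scrapy-splash documentation configures `SPLASH_URL` with the port the Splash service listens on (8050 by default), while the value above has no port. A minimal sketch, assuming Splash is running locally on its default port:

```python
# Assumption: Splash was started locally with its default port, e.g.
#   docker run -p 8050:8050 scrapinghub/splash
# The scrapy-splash middleware needs the full address, including the port.
SPLASH_URL = 'http://127.0.0.1:8050'
```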

Below is the relevant part of the log file. I would appreciate it if you could tell me why these log entries appear and why I cannot read the data I want.

2020-07-02 15:27:09 [scrapy.core.engine] INFO: Spider opened
2020-07-02 15:27:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-02 15:27:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-07-02 15:27:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://news.samsung.com/mx/> from <GET https://news.samsung.com/mx>
2020-07-02 15:27:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.samsung.com/mx/> (referer: None)
2020-07-02 15:27:09 [scrapy.core.scraper] ERROR: Spider error processing <GET https://news.samsung.com/mx/> (referer: None)
Traceback (most recent call last):
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\defer.py", line 117, in iter_errback
    yield next(it)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
    for el in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 338, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\scrapy_tutorial\spider\spider\spiders\crawlspider.py", line 22, in parse
    lists = response.css('#recent_list_box > li').getAll()
AttributeError: 'SelectorList' object has no attribute 'getAll'
2020-07-02 15:27:09 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-02 15:27:09 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

You have to change

lists = response.css('#recent_list_box > li').getAll()

to

lists = response.css('#recent_list_box > li').getall()

with a lowercase "a".
