
Please tell me what's wrong with the scrapy splash code

I tried to scrape the content (#recent_list_box > li) of Samsung Newsroom Mexico, but it doesn't work. Can you tell me why?

https://news.samsung.com/mx

I think the content is loaded with JavaScript, but I can't read it.

Versions: scrapy 2.1.0, splash 3.4.1

Spider code:

import scrapy
from scrapy_splash import SplashRequest
from scrapy import Request


class CrawlspiderSpider(scrapy.Spider):
    name = 'crawlspider'
    allowed_domains = ['news.samsung.com/mx']
    page = 1
    start_urls = ['https://news.samsung.com/mx']

    def start_request(self):
        for url in self.start_urls:
            yield SplashRequest(
                         url,
                         self.main_parse,
                         endpoint='render.html',
                         args = {'wait': 10}
                     )

    def parse(self, response):
        lists = response.css('#recent_list_box > li').getAll()
        for list in lists:
            yield {"list" :lists.get() }

We've included the middleware involved. Settings code:

BOT_NAME = 'spider'
SPIDER_MODULES = ['spider.spiders']
NEWSPIDER_MODULE = 'spider.spiders'
LOG_FILE = 'log.txt'
AJAXCRAWL_ENABLED = True
ROBOTSTXT_OBEY = False
SPLASH_URL = 'http://127.0.0.1'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPLASH_LOG_400 = True

Below are the remaining logs in the log file. I would appreciate it if you could tell me why this log is produced and why I can't read the data I want.

2020-07-02 15:27:09 [scrapy.core.engine] INFO: Spider opened
2020-07-02 15:27:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-02 15:27:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-07-02 15:27:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://news.samsung.com/mx/> from <GET https://news.samsung.com/mx>
2020-07-02 15:27:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.samsung.com/mx/> (referer: None)
2020-07-02 15:27:09 [scrapy.core.scraper] ERROR: Spider error processing <GET https://news.samsung.com/mx/> (referer: None)
Traceback (most recent call last):
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\defer.py", line 117, in iter_errback
    yield next(it)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
    for el in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 338, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\scrapy_tutorial\spider\spider\spiders\crawlspider.py", line 22, in parse
    lists = response.css('#recent_list_box > li').getAll()
AttributeError: 'SelectorList' object has no attribute 'getAll'
2020-07-02 15:27:09 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-02 15:27:09 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

You have to change

lists = response.css('#recent_list_box > li').getAll()

to

lists = response.css('#recent_list_box > li').getall()

(lowercase letter 'a')
