
Scrapy Spider Not Saving Data

I am trying to save a basketball team's schedule to a CSV file using Scrapy. I have written the following code in these files:

settings.py

BOT_NAME = 'test_project'

SPIDER_MODULES = ['test_project.spiders']
NEWSPIDER_MODULE = 'test_project.spiders'

FEED_FORMAT = "csv"
FEED_URI = "cportboys.csv"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'test_project (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True
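
(As an aside: the FEED_FORMAT/FEED_URI pair above is valid for the Scrapy 1.4.0 shown in the log below, but it was deprecated later. If you are on Scrapy 2.1 or newer, the equivalent configuration, assuming the same output file, would be the FEEDS setting:)

FEEDS = {
    'cportboys.csv': {'format': 'csv'},
}

Either form tells the built-in feed exporter to write every scraped item to cportboys.csv as CSV.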

khsaabot.py

import scrapy


class KhsaabotSpider(scrapy.Spider):
    name = 'khsaabot'
    allowed_domains = ['https://scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978']
    start_urls = ['http://https://scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/']

    def parse(self, response):
        date = response.css('.mdate::text').extract()
        opponent = response.css('.opponent::text').extract()
        place = response.css('.schedule-loc::text').extract()

        for item in zip(date, opponent, place):
            scraped_info = {
                'date': item[0],
                'opponent': item[1],
                'place': item[2],
            }

            yield scraped_info

Now, I'm not sure what is going wrong here. When I run it in the terminal with "scrapy crawl khsaabot", it doesn't throw any errors and appears to work fine. But in case something is going wrong in the terminal, here is the output I get there:

2017-12-27 17:21:49 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: test_project)
2017-12-27 17:21:49 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'test_project', 'FEED_FORMAT': 'csv', 'FEED_URI': 'cportboys.csv', 'NEWSPIDER_MODULE': 'test_project.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['test_project.spiders']}
2017-12-27 17:21:49 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2017-12-27 17:21:49 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-12-27 17:21:49 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-12-27 17:21:49 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-12-27 17:21:49 [scrapy.core.engine] INFO: Spider opened
2017-12-27 17:21:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-27 17:21:49 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://https/robots.txt> (failed 1 times): DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://https/robots.txt> (failed 2 times): DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://https/robots.txt> (failed 3 times): DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http://https/robots.txt>: DNS lookup failed: no results for hostname lookup: https.
Traceback (most recent call last):
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/internet/defer.py", line 1384, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/python/failure.py", line 408, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/internet/endpoints.py", line 954, in startConnectionAttempts
    "no results for hostname lookup: {}".format(self._hostStr)
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://https//scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/> (failed 1 times): DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://https//scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/> (failed 2 times): DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://https//scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/> (failed 3 times): DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.core.scraper] ERROR: Error downloading <GET http://https//scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/>
Traceback (most recent call last):
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/internet/defer.py", line 1384, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/python/failure.py", line 408, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Users/devsandbox/anaconda3/lib/python3.6/site-packages/twisted/internet/endpoints.py", line 954, in startConnectionAttempts
    "no results for hostname lookup: {}".format(self._hostStr)
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: https.
2017-12-27 17:21:49 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-27 17:21:49 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 6,
 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 6,
 'downloader/request_bytes': 1416,
 'downloader/request_count': 6,
 'downloader/request_method_count/GET': 6,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 12, 27, 23, 21, 49, 579649),
 'log_count/DEBUG': 7,
 'log_count/ERROR': 2,
 'log_count/INFO': 7,
 'memusage/max': 50790400,
 'memusage/startup': 50790400,
 'retry/count': 4,
 'retry/max_reached': 2,
 'retry/reason_count/twisted.internet.error.DNSLookupError': 4,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2017, 12, 27, 23, 21, 49, 323652)}
2017-12-27 17:21:49 [scrapy.core.engine] INFO: Spider closed (finished)

The output looks fine to me, but I'm still new to Scrapy, so I might be missing something.

Thanks, guys.

You are getting twisted.internet.error.DNSLookupError messages in your log. Look at your start_urls list: the entry starts with "http://https://". Change:

start_urls = ['http://https://scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/']

to:

start_urls = ['https://scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/']
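For completeness, note two related problems visible in the question's code: allowed_domains should contain bare domain names rather than full URLs, and the parse method must be indented inside the spider class. A minimal corrected sketch, keeping the original CSS selectors unchanged, might look like this:

import scrapy


class KhsaabotSpider(scrapy.Spider):
    name = 'khsaabot'
    # allowed_domains takes domains only, not URLs with a scheme or path
    allowed_domains = ['scoreboard.12dt.com']
    start_urls = ['https://scoreboard.12dt.com/scoreboard/khsaa/kybbk17/?id=51978/']

    def parse(self, response):
        # Extract the three columns and pair them up row by row
        dates = response.css('.mdate::text').extract()
        opponents = response.css('.opponent::text').extract()
        places = response.css('.schedule-loc::text').extract()
        for date, opponent, place in zip(dates, opponents, places):
            yield {'date': date, 'opponent': opponent, 'place': place}

With the URL fixed, the FEED_FORMAT/FEED_URI settings from settings.py should write the scraped rows to cportboys.csv.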
