Scrapy CrawlerProcess Not Saving Data with CrawlSpider

The following code executes and creates the output file without errors. However, no data is saved to the JSON file.

I turned off AutoThrottle, which in the past has interfered with downloading data, but that didn't fix the issue.
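
For reference, AutoThrottle is controlled by Scrapy's AUTOTHROTTLE_ENABLED setting; a one-line sketch of disabling it, assuming it is added right after the SETTINGS dict defined below (False is also Scrapy's default for this setting):

SETTINGS['AUTOTHROTTLE_ENABLED'] = False  # disable the AutoThrottle extension explicitly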

Scrapy==1.4.0

from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]    

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]    

    def parse_item(self, response):

        print('went to: {}'.format(response.url))

        yield {'url': response.url}         

FILE_NAME = 'my_data.json'
SETTINGS = {
            'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
            'FEED_FORMAT': 'json',
            'FEED_URI': FILE_NAME,          
            } 

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start() 

EDIT:

The scraper is getting the data, as seen in the log:

2017-11-21 11:07:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-21 11:07:55 [scrapy.utils.log] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'FEED_URI': 'my_data.json', 'FEED_FORMAT': 'json'}
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.feedexport.FeedExporter']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-21 11:07:55 [scrapy.core.engine] INFO: Spider opened
2017-11-21 11:07:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-21 11:07:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6041
2017-11-21 11:07:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com> (referer: None)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/us> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/congress-capitol-hill> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/president-donald-trump-45> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/us-security> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/trumpmerica> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/state-cnn-politics-magazine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinion/opinion-social-issues> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinions/cnnireport> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/vr/vr-archives> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/middle-east> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://imagesource.cnn.com> from <GET http://www.cnn.com/collection>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/supreme-court-nine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://transcripts.cnn.com/TRANSCRIPTS/> from <GET http://www.cnn.com/transcripts>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/pf/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/luxury/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/data/markets/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/technology/> (referer: http://www.cnn.com)
went to: http://www.cnn.com/us
2017-11-21 11:07:56 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.cnn.com/us>
{'url': 'http://www.cnn.com/us'}
2017-11-21 11:07:56 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://www.cnn.com/us> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://www.cnn.com/email/subscription> from <GET http://www.cnn.com/newsletters>
... 

We can see the scraper visiting the URLs, crawling additional URLs found on each page, printing the response URL (the "went to:" lines), and then yielding the data as {'url': ...}, e.g. {'url': 'http://www.cnn.com/us'}.

So your code as such works fine, but I assume you stop it twice (pressing Ctrl-C a second time forces an unclean shutdown) or kill it, which leaves the JSON file blank.
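
A quick way to confirm that (my assumption about how you would check it): a finished 'json' feed is a single top-level JSON list, so an interrupted or killed run typically leaves a file that does not parse at all.

import json

# Sanity check only, not part of the fix: a complete 'json' feed parses as one
# JSON list of items; an interrupted export usually raises a ValueError here.
with open('my_data.json') as f:
    try:
        items = json.load(f)
        print('valid JSON with {} items'.format(len(items)))
    except ValueError as err:
        print('not valid JSON (likely an interrupted export): {}'.format(err))

Given that, I would change two things.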

First, use jsonlines instead of json. This makes sure that even if the spider is killed you won't lose too many items: each line is itself a valid JSON object, so the exporter can keep appending to the same file. With the json format, breaking the program partway through leaves you with invalid JSON.
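
To illustrate why that matters, here is a minimal sketch (assuming the feed file is my_data.jsonl, as in the code below) of reading back whatever was written, even from a run that was killed partway through:

import json

# Each line of a jsonlines feed is a standalone JSON object, so every line
# written before the spider stopped is still recoverable.
with open('my_data.jsonl') as f:
    items = [json.loads(line) for line in f if line.strip()]

print('recovered {} items'.format(len(items)))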

Second, I would set CONCURRENT_ITEMS to a lower value so items are exported more often (the default value is 100).

from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule

from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):

        print('went to: {}'.format(response.url))

        yield {'url': response.url}

FILE_NAME = 'my_data.jsonl'
SETTINGS = {
            'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
            'FEED_FORMAT': 'jsonlines',  # one JSON object per line
            'FEED_URI': FILE_NAME,
            'CONCURRENT_ITEMS': 1        # default is 100; a lower value exports items more often
            }

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start()

After that, you will find the data does get exported fine:

Exported items:
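
The jsonlines file holds one item per line, roughly like this (exact URLs will differ from run to run):

{"url": "http://www.cnn.com/us"}
...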
