scrapy from script output in json
I am running Scrapy from a Python script:

def setup_crawler(domain):
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)
    spider = ArgosSpider(domain=domain)
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
    reactor.run()
It runs successfully and stops, but where are the results? I want the results in JSON format, something like:

result = responseInJSON

just as when we use the command:

scrapy crawl argos -o result.json -t json
You need to set the FEED_FORMAT and FEED_URI settings manually:

settings.overrides['FEED_FORMAT'] = 'json'
settings.overrides['FEED_URI'] = 'result.json'
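Once the feed export has written result.json, its contents can be loaded back into a Python variable with the standard-library json module. A minimal sketch (the sample items below are made up for illustration, standing in for whatever the crawl actually exported):

```python
import json

# Stand-in for the feed the crawl would have written to result.json,
# so this snippet is self-contained and runnable.
sample_items = [{'name': 'item1', 'price': '10'},
                {'name': 'item2', 'price': '20'}]
with open('result.json', 'w') as f:
    json.dump(sample_items, f)

# Load the exported feed back into a variable, as the question asked for.
with open('result.json') as f:
    result = json.load(f)  # a list of item dicts
```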
If you want to get the results into a variable, you can define a Pipeline class that collects the items into a list, and use a spider_closed signal handler to see the results:
import json

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings


class MyPipeline(object):
    def process_item(self, item, spider):
        results.append(dict(item))
        return item

results = []

def spider_closed(spider):
    print(results)

# set up spider
spider = TestSpider(domain='mydomain.org')

# set up settings
settings = get_project_settings()
settings.overrides['ITEM_PIPELINES'] = {'__main__.MyPipeline': 1}

# set up crawler
crawler = Crawler(settings)
crawler.signals.connect(spider_closed, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)

# start crawling
crawler.start()
log.start()
reactor.run()
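If you want the collected items as a JSON string rather than just printing them, the spider_closed handler above could serialize the list with json.dumps. A sketch, assuming the pipeline has filled results with plain dicts (the sample data here is illustrative):

```python
import json

# Items as the pipeline would have collected them (illustrative sample).
results = [{'title': 'a'}, {'title': 'b'}]

def spider_closed(spider):
    # Serialize the collected items to a JSON string, similar to what
    # `scrapy crawl argos -o result.json -t json` would write to disk.
    responseInJSON = json.dumps(results)
    print(responseInJSON)
```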
FYI, take a look at how Scrapy parses command-line arguments.

See also: Capturing stdout within the same process in Python.
I managed to get it working by adding FEED_FORMAT and FEED_URI to the CrawlerProcess constructor, using the basic Scrapy API tutorial code, as follows:
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
'FEED_FORMAT': 'json',
'FEED_URI': 'result.json'
})
Simple!
from scrapy import cmdline
cmdline.execute("scrapy crawl argos -o result.json -t json".split())
Put that script in the same place where you put scrapy.cfg.