How to create a CSV file dynamically named after the spider in Scrapy (Python)
I'm currently trying to export the scraped data into files named after the spider.

Here is my pipelines.py:
```python
from mydatacrowd.models import Datacrowd
from scrapy import signals
from scrapy.contrib.exporter import CsvItemExporter


class CsvExportPipeline(object):

    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        print 'Hello world!'
        print spider.name
        # Name the output file after the spider
        file = open('%s.csv' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = CsvItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)  # without this the CSV stays empty
        item.save()
        return item
```
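As an aside, the per-spider file naming done in spider_opened is plain Python and can be checked outside Scrapy; here is a minimal standalone sketch using the stdlib csv module (export_items and the sample data are illustrative, not part of the project above):

```python
import csv
import os
import tempfile


def export_items(spider_name, items, fieldnames, directory):
    # Build the output path from the spider name, mirroring the
    # '%s.csv' % spider.name pattern used in the pipeline.
    path = os.path.join(directory, '%s.csv' % spider_name)
    with open(path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(items)
    return path


tmp = tempfile.mkdtemp()
path = export_items('datacrowd', [{'id': 1, 'title': 'a'}], ['id', 'title'], tmp)
```

This produces a datacrowd.csv file containing a header row followed by the exported rows, which is essentially what CsvItemExporter does for each item.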
Here is the relevant part of my settings.py:
```python
...
ITEM_PIPELINES = {
    'datacrowdscrapy.pipelines.CsvExportPipeline': 1000,
}

FEED_FORMAT = 'csv'
FEED_EXPORTERS = {
    'csv': 'datacrowdscrapy.feedexport.CsvScrapperExporter'
}
...
```
And here is my feedexport.py:
```python
from scrapy.conf import settings
from scrapy.contrib.exporter import CsvItemExporter


class CsvScrapperExporter(CsvItemExporter):

    def __init__(self, *args, **kwargs):
        kwargs['fields_to_export'] = settings.getlist('EXPORT_FIELDS') or None
        kwargs['encoding'] = settings.get('EXPORT_ENCODING', 'utf-8')
        super(CsvScrapperExporter, self).__init__(*args, **kwargs)
```
No file is created, no error is raised, and "Hello world!" never appears in the logs. What am I missing?

Thanks!
Edit:

There is no FEED_URI setting in my settings.py; could that be the problem?

Looking at the source of the scrapy crawl command, Scrapy only reads the FEED_EXPORTERS setting when you pass it an output option, like this:

```shell
scrapy crawl <spider_name> -o csv
```
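For reference, in Scrapy releases from that era the -o flag takes an output file and -t the feed format, so an invocation that actually exercises FEED_EXPORTERS would look more like this (spider and file names illustrative):

```shell
scrapy crawl myspider -o items.csv -t csv
```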
From scrapy/commands/crawl.py:
```python
if opts.output:
    ...
    valid_output_formats = self.settings['FEED_EXPORTERS'].keys() + \
        self.settings['FEED_EXPORTERS_BASE'].keys()
    ...
    self.settings.overrides['FEED_FORMAT'] = opts.output_format
```
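Note that Scrapy's built-in feed exports can already name the output file after the spider: FEED_URI supports a %(name)s placeholder that is expanded to the spider's name when the feed is written. If per-spider file naming is the only goal, a settings.py fragment along these lines may be enough on its own (a sketch, not a drop-in replacement for the pipeline above):

```python
# settings.py -- the feed export system substitutes %(name)s
# with the running spider's name when it builds the output URI.
FEED_URI = '%(name)s.csv'
FEED_FORMAT = 'csv'
```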