How to fix my Scrapy dictionary output format for CSV/JSON

My code is below. I'm looking to extract the results to CSV. However, Scrapy produces a single dictionary with two keys, with all the values lumped together under each key. The output does not look good. How do I fix this? Can it be done through pipelines, item loaders, etc.?

Thanks very much.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
from gumtree1.items import GumtreeItems

class AdItemLoader(ItemLoader):
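    # strip whitespace from every extracted value (unicode.strip is Python 2; use str.strip on Python 3)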
    jobs_in = MapCompose(unicode.strip)

class GumtreeEasySpider(CrawlSpider):
    name = 'gumtree_easy'
    allowed_domains = ['gumtree.com.au']
    start_urls = ['http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//a[@class="rs-paginator-btn next"]'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        loader = AdItemLoader(item=GumtreeItems(), response=response)
        loader.add_xpath('jobs','//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()')
        loader.add_xpath('location', '//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()')
        yield loader.load_item() 

The result:

2016-03-16 01:51:32 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-5/c9302?ad=offering>
{'jobs': [u'Technical Account Manager',
          u'Service & Maintenance Advisor',
          u'we are hiring motorbike driver delivery leaflet.Strat NOW(BE...',
          u'Casual Gardner/landscape maintenance labourer',
          u'Seeking for Experienced Builders Cleaners with white card',
          u'Babysitter / home help for approx 2 weeks',
          u'Toothing brickwork | Dapto',
          u'EXPERIENCED CHEF',
          u'ChildCare Trainee Wanted',
          u'Skilled Pipelayers & Drainer- Sydney Region',
          u'Casual staff required for Royal Easter Show',
          u'Fencing contractor',
          u'Excavator & Loader Operator',
          u'***EXPERIENCED STRAWBERRY AND RASPBERRY PICKERS WANTED***',
          u'Kitchenhand required for Indian restaurant',
          u'Taxi Driver Wanted',
          u'Full time nanny/sitter',
          u'Kitchen hand and meal packing',
          u'Depot Assistant Required',
          u'hairdresser Junior apprentice required for salon in Randwick',
          u'Insulation Installers Required',
          u'The Knox is seeking a new apprentice',
          u'Medical Receptionist Needed in Bankstown Area - Night Shifts',
          u'On Call Easy Work, Do you live in Berala, Lidcombe or Auburn...',
          u'Looking for farm jon'],
 'location': [u'Melbourne City',
              u'Eastern Suburbs',
              u'Rockdale Area',
              u'Logan Area',
              u'Greater Dandenong',
              u'Brisbane North East',
              u'Kiama Area',
              u'Byron Area',
              u'Dardanup Area',
              u'Blacktown Area',
              u'Auburn Area',
              u'Kingston Area',
              u'Inner Sydney',
              u'Northern Midlands',
              u'Inner Sydney',
              u'Hume Area',
              u'Maribyrnong Area',
              u'Perth City',
              u'Brisbane South East',
              u'Eastern Suburbs',
              u'Gold Coast South',
              u'North Canberra',
              u'Bankstown Area',
              u'Auburn Area',
              u'Gingin Area']}

Should it be like this instead, with jobs and location as individual dicts? This writes correctly to CSV, with jobs and location in separate cells, but I suspect that using for loops and zip is not the best way.

import scrapy
from gumtree1.items import GumtreeItems

class AussieGum1Spider(scrapy.Spider):
    name = "aussie_gum1"
    allowed_domains = ["gumtree.com.au"]
    start_urls = (
        'http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering',
    )

    def parse(self, response):
        item = GumtreeItems()
        jobs = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()').extract()
        location = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()').extract()
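        # zip pairs each job title with its location by position, stopping at the shorter list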
        for j, l in zip(jobs, location):
            item['jobs'] = j.strip()
            item['location'] = l
            yield item

Partial results below.

2016-03-16 02:20:46 [scrapy] DEBUG: Crawled (200) <GET http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering> (referer: http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering)
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Live In Au pair-Urgent', 'location': u'Wanneroo Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'live in carer', 'location': u'Fraser Coast'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Mental Health Nurse', 'location': u'Perth Region'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Experienced NBN pit and pipe installers/node and cabinet wor...',
 'location': u'Marrickville Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Delivery Driver / Pizza Maker Job - Dominos Pizza',
 'location': u'Hurstville Area'}

Thanks very much.

To be honest, using a for loop is the right way, but you can work around it in a pipeline:

from scrapy.http import Response
from gumtree1.items import GumtreeItems, CustomItem
from scrapy.exceptions import DropItem


class CustomPipeline(object):

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_item(self, item, spider):
        if isinstance(item, GumtreeItems):
            for i, jobs in enumerate(item['jobs']):
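                # feed one item per job/location pair back into the scraper's output;
                # note that _process_spidermw_output is a private Scrapy API and may break between versions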
                self.crawler.engine.scraper._process_spidermw_output(
                    CustomItem(jobs=jobs, location=item['location'][i]), None, Response(''), spider)
            raise DropItem("main item dropped")
        return item

Also add the custom item:

import scrapy

class CustomItem(scrapy.Item):
    jobs = scrapy.Field()
    location = scrapy.Field()
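
For the pipeline to run, it also has to be enabled in settings.py. A minimal sketch, assuming the pipeline class lives in gumtree1/pipelines.py:

ITEM_PIPELINES = {
    'gumtree1.pipelines.CustomPipeline': 300,
}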

Hope this helped; again, I think you should use the loop.

Have a parent selector for every item and extract the job and location relative to it:

rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
for row in rows:
    item = GumtreeItems()
    item['jobs'] = row.xpath('.//*[@itemprop="name"]/text()').extract_first().strip()
    item['location'] = row.xpath('.//*[@class="rs-ad-location-area"]/text()').extract_first().strip()
    yield item
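
Since the question also asks about item loaders: the same per-row approach works with the AdItemLoader from the question by passing each row in as the loader's selector. A minimal sketch, assuming AdItemLoader is extended with a TakeFirst default output processor so each field yields a single value rather than a list:

from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose
from gumtree1.items import GumtreeItems

class AdItemLoader(ItemLoader):
    # TakeFirst returns a single value per field instead of a list
    default_output_processor = TakeFirst()
    jobs_in = MapCompose(unicode.strip)
    location_in = MapCompose(unicode.strip)

Then, inside the spider:

def parse_item(self, response):
    rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
    for row in rows:
        # each loader is scoped to one row, so jobs and location stay paired
        loader = AdItemLoader(item=GumtreeItems(), selector=row)
        loader.add_xpath('jobs', './/*[@itemprop="name"]/text()')
        loader.add_xpath('location', './/*[@class="rs-ad-location-area"]/text()')
        yield loader.load_item()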
