
Scrapy: TypeError: string indices must be integers, not str?

I wrote a spider that scrapes data from a news website:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from items import CravlingItem

import re


class CountrySpider(CrawlSpider):
    name = 'Post_and_Parcel_Human_Resource'

    allowed_domains = ['postandparcel.info']
    start_urls = ['http://postandparcel.info/category/news/human-resources/']

    rules = (
        Rule(LinkExtractor(allow='',
                           restrict_xpaths=(
                               '//*[@id="page"]/div[4]/div[1]/div[1]/div[1]/h1/a',
                               '//*[@id="page"]/div[4]/div[1]/div[1]/div[2]/h1/a',
                               '//*[@id="page"]/div[4]/div[1]/div[1]/div[3]/h1/a'
                           )),
             callback='parse_item',
             follow=False),
    )

    def parse_item(self, response):
        i = CravlingItem()
        i['title'] = " ".join(response.xpath('//div[@class="cd_left_big"]/div/h1/text()')
                              .extract()).strip() or " "
        i['headline'] = self.clear_html(
            " ".join(response.xpath('//div[@class="cd_left_big"]/div//div/div[1]/p')
                                 .extract()).strip()) or " "
        i['text'] = self.clear_html(
            " ".join(response.xpath('//div[@class="cd_left_big"]/div//div/p').extract()).strip()) or " "
        i['url'] = response.url
        i['image'] = (" ".join(response.xpath('//*[@id="middle_column_container"]/div[2]/div/img/@src')
                              .extract()).strip()).replace('wp-content/', 'http://postandparcel.info/wp-content/') or " "
        i['author'] = " "
        # print("\n")
        # print(i)
        return i

    @staticmethod
    def clear_html(html):
        # (?s) moved to the front of the pattern; Python 3.11+ rejects it mid-pattern
        text = re.sub(r'(?s)<(style).*?</\1>|<[^>]*?>|\n|\t|\r', '', html)
        return text
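As a quick sanity check of the `clear_html` helper (the sample HTML below is made up for illustration), it removes `<style>` blocks, any remaining tags, and control whitespace:

```python
import re

def clear_html(html):
    # Same regex as the spider's helper, with the (?s) inline flag moved
    # to the start of the pattern (Python 3.11+ rejects it mid-pattern).
    return re.sub(r'(?s)<(style).*?</\1>|<[^>]*?>|\n|\t|\r', '', html)

print(clear_html('<style>p{color:red}</style><p>Hello <b>world</b></p>'))  # -> Hello world
```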

And I also wrote some code in a pipeline to refine the extracted text. Here is the pipeline:

from scrapy.conf import settings
from scrapy import log
import pymongo
import json
import codecs
import re


class RefineDataPipeline(object):
    def process_item(self, item, spider):
        # In this section: the edits below are applied to all scrapy crawlers.
        item['text'] = str(item['text'].encode("utf-8"))
        replacements = {
            "U.S.": " US ", " M ": "Million", "same as the title": "",
            " MMH Editorial ": "", " UPS ": "United Parcel Service",
            " UK ": " United Kingdom ", " Penn ": " Pennsylvania ",
            " CIPS ": " Chartered Institute of Procurement and Supply ",
            " t ": " tonnes ", " Uti ": " UTI ",
            "EMEA": " Europe, Middle East and Africa ",
            " APEC ": " Asia-Pacific Economic Cooperation ",
            " m ": " million ", " Q4 ": " 4th quarter ",
            "LLC": "", "Ltd": "", "Inc": "",
            "Published text": " Original text "
        }

        allparen = re.findall(r'\(.+?\)', item['text'])
        for item in allparen:
            if item[1].isupper() and item[2].isupper():
                replacements[str(item)] = ''
            elif item[1].islower() or item[2].islower():
                replacements[str(item)] = item[1:len(item) - 1]
            else:
                try:
                    val = int(item[1:len(item) - 1])
                    replacements[str(item)] = str(val)
                except ValueError:
                    pass

        def multireplace(s, replacements):
            substrs = sorted(replacements, key=len, reverse=True)
            regexp = re.compile('|'.join(map(re.escape, substrs)))
            return regexp.sub(lambda match: replacements[match.group(0)], s)

        item['text'] = multireplace(item['text'], replacements)
        item['text'] = re.sub(r'\s+', ' ', item['text']).strip()
        return item

But there is a big problem that keeps the spider from successfully scraping the data:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/hathout/Desktop/updataed portcalls/thomas/thomas/pipelines.py", line 41, in process_item
    item['text'] = multireplace(item['text'], replacements)
TypeError: string indices must be integers, not str

I really do not know how to get past the "TypeError: string indices must be integers, not str" error.

Short answer: the variable `item` is a string.
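The failure can be reproduced in a few lines (the values below are made-up stand-ins; only the shape of the mistake matters):

```python
# Hypothetical stand-ins for the real pipeline values.
item = {'text': 'The carrier (UPS) expanded.'}  # the Scrapy item (a dict here)
allparen = ['(UPS)']                            # what re.findall(r'\(.+?\)', ...) returns

for item in allparen:   # rebinds the name: `item` is now the string '(UPS)'
    pass

try:
    item['text']        # subscripting a str with a str key
except TypeError as err:
    print(type(item).__name__, '->', err)
```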

Long answer: in this section

allparen= re.findall('\(.+?\)',item['text'])
for item in allparen:
    ...

you are iterating over `allparen`, which should be a list of strings (or an empty list), and you use the same variable name `item` for the loop variable. So `item` becomes a string, not a dict/Item object. Use a different name for the loop variable, for example:

for paren in allparen:
    if paren[1].isupper() and paren[2].isupper():
    ...

Basically, by reusing the same variable name in the loop you are overwriting the original `item` variable.
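A minimal sketch of the corrected pipeline logic with the loop variable renamed (the helper name `apply_paren_rules` and the sample text are assumptions for illustration; only the rename matters):

```python
import re

def apply_paren_rules(item, replacements):
    # Add one replacement rule per parenthesised group, as in the pipeline,
    # but with the loop variable named `paren` so `item` stays a dict.
    for paren in re.findall(r'\(.+?\)', item['text']):
        if paren[1].isupper() and paren[2].isupper():
            replacements[paren] = ''                 # drop acronyms like (UPS)
        elif paren[1].islower() or paren[2].islower():
            replacements[paren] = paren[1:-1]        # unwrap plain text
        else:
            try:
                replacements[paren] = str(int(paren[1:-1]))  # unwrap numbers
            except ValueError:
                pass
    return replacements

def multireplace(s, replacements):
    # Longest keys first so overlapping rules apply deterministically.
    substrs = sorted(replacements, key=len, reverse=True)
    regexp = re.compile('|'.join(map(re.escape, substrs)))
    return regexp.sub(lambda m: replacements[m.group(0)], s)

item = {'text': 'A note (UPS) about (parcel volumes) in (2019).'}
rules = apply_paren_rules(item, {})
item['text'] = re.sub(r'\s+', ' ', multireplace(item['text'], rules)).strip()
print(item['text'])  # -> A note about parcel volumes in 2019.
```

Because `item` is never rebound, `item['text']` keeps working after the loop, and the `TypeError` disappears.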
