
How to remove u'' from python script result?

I am trying to write a parsing script with Python/Scrapy. How can I remove the [] and u' from the strings in the result file?

Right now I have this code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.utils.markup import remove_tags
from googleparser.items import GoogleparserItem
import sys

class GoogleparserSpider(BaseSpider):
    name = "google.com"
    allowed_domains = ["google.com"]
    start_urls = [
        "http://www.google.com/search?q=this+is+first+test&num=20&hl=uk&start=0",
    "http://www.google.com/search?q=this+is+second+test&num=20&hl=uk&start=0"
    ]

    def parse(self, response):
        print "===START======================================================="
        hxs = HtmlXPathSelector(response)
        qqq = hxs.select('/html/head/title/text()').extract()
        print qqq
        print "---DATA--------------------------------------------------------"

        sites = hxs.select('/html/body/div[5]/div[3]/div/div/div/ol/li/h3')
        i = 1
        items = []
        for site in sites:
            try:
                item = GoogleparserItem()
                title1 = site.select('a').extract()   # list of matching <a> nodes (unicode strings)
                title2 = str(title1)                  # str() of the whole list keeps the [u'...'] wrapper
                title = remove_tags(title2)
                link = site.select('a/@href').extract()
                item['num'] = i
                item['title'] = title
                item['link'] = link
                i = i + 1
                items.append(item)
            except:
                print 'EXCEPTION'
        print "===END========================================================="
        return items

SPIDER = GoogleparserSpider()

After running it, I get results like this:

python scrapy-ctl.py crawl google.com

2010-07-25 17:44:44+0300 [-] Log opened.
2010-07-25 17:44:44+0300 [googleparser] DEBUG: Enabled extensions: CoreStats, CloseSpider, WebService, TelnetConsole, MemoryUsage
2010-07-25 17:44:44+0300 [googleparser] DEBUG: Enabled scheduler middlewares: DuplicatesFilterMiddleware
2010-07-25 17:44:44+0300 [googleparser] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloaderStats, UserAgentMiddleware, RedirectMiddleware, DefaultHeadersMiddleware, CookiesMiddleware, HttpCompressionMiddleware, RetryMiddleware
2010-07-25 17:44:44+0300 [googleparser] DEBUG: Enabled spider middlewares: UrlLengthMiddleware, HttpErrorMiddleware, RefererMiddleware, OffsiteMiddleware, DepthMiddleware
2010-07-25 17:44:44+0300 [googleparser] DEBUG: Enabled item pipelines: CsvWriterPipeline
2010-07-25 17:44:44+0300 [-] scrapy.webservice.WebService starting on 6080
2010-07-25 17:44:44+0300 [-] scrapy.telnet.TelnetConsole starting on 6023
2010-07-25 17:44:44+0300 [google.com] INFO: Spider opened
2010-07-25 17:44:45+0300 [google.com] DEBUG: Crawled (200) <GET http://www.google.com/search?q=this+is+first+test&num=20&hl=uk&start=0> (referer: None)
===START=======================================================
[u'this is first test - \u041f\u043e\u0448\u0443\u043a Google']
---DATA--------------------------------------------------------
2010-07-25 17:52:42+0300 [google.com] DEBUG: Scraped GoogleparserItem(num=1, link=[u'http://www.amazon.com/First-Protector-Small-Tamora-Pierce/dp/0679889175'], title=u"[u'Amazon.com: First Test (Protector of the Small) (9780679889175 ...']") in <http://www.google.com/search?q=this+is+first+test&num=100&hl=uk&start=0>

And this text in the output file:

1,[u'Amazon.com: First Test (Protector of the Small) (9780679889175 ...'],[u'http://www.amazon.com/First-Protector-Small-Tamora-Pierce/dp/0679889175']

Prettier: print qqq.pop()

Replace print qqq with print qqq[0]. You get that result because qqq is a list.

The same problem applies to your text file: you have a list containing one element, and you are writing out the list rather than the element inside it.

It looks like the result of extract() is a list. Try:

print ', '.join(qqq)
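
For the same reason, the [] and u' in the output file come from writing the list itself (the str(title1) step in the spider) instead of its first element. A minimal sketch in plain Python 2 with a made-up title shows the difference; the commented lines at the end are only a hypothetical, untested adaptation of the question's parse():

# -*- coding: utf-8 -*-
# extract() hands back a list of unicode strings; turning the whole list into
# a string is what produces the "[u'...']" wrapper in the output file.
titles = [u'Amazon.com: First Test (Protector of the Small)']  # made-up sample

print str(titles)        # [u'Amazon.com: First Test (Protector of the Small)']
print titles[0]          # Amazon.com: First Test (Protector of the Small)
print ', '.join(titles)  # same text, and still works if the list has several entries

# Hypothetical change inside parse() (untested sketch):
#   title = remove_tags(title1[0]) if title1 else ''
#   item['link'] = link[0] if link else ''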

The u in front of the quotes simply means it is a Unicode string. See the reference here: http://docs.python.org/tutorial/introduction.html#unicode-strings. The fix is to convert the content to a string with the str() method.
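
A small illustration of that (my own addition, reusing the title from the log above): in Python 2, str() only works when the text is pure ASCII, so for anything else, such as the Cyrillic part of the Google title, encode it explicitly:

# -*- coding: utf-8 -*-
title = u'this is first test - \u041f\u043e\u0448\u0443\u043a Google'

print str(u'Amazon.com: First Test')  # fine, the text is pure ASCII
print title.encode('utf-8')           # works for any text; str(title) would raise UnicodeEncodeError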

