
How to capture iterated output variable into list for analysis

I am trying to parse the html text from a number of web pages for sentiment analysis. With help from the community I was able to iterate over many urls and produce a sentiment score from the textblob library's sentiment analysis, and I successfully used the print function to output a score for each url. However, I have not been able to take the many outputs produced via the return variable and put them into a list, so that I can continue the analysis by using the stored numbers to calculate an average and later display the results in a graph.

Code with the print function:

import requests
import json
import urllib
from bs4 import BeautifulSoup
from textblob import TextBlob



#you can add to this
urls = ["http://www.thestar.com/business/economy/2015/05/19/canadian-consumer-confidence-dips-but-continues-to-climb-in-us-report.html",
        "http://globalnews.ca/news/2012054/canada-ripe-for-an-invasion-of-u-s-dollar-stores-experts-say/",
        "http://www.cp24.com/news/tsx-flat-in-advance-of-fed-minutes-loonie-oil-prices-stabilize-1.2381931",
        "http://www.marketpulse.com/20150522/us-and-canadian-gdp-to-close-out-week-in-fx/",
        "http://www.theglobeandmail.com/report-on-business/canada-pension-plan-fund-sees-best-ever-annual-return/article24546796/",
        "http://www.marketpulse.com/20150522/canadas-april-inflation-slowest-in-two-years/"]


def parse_websites(list_of_urls):
    for url in list_of_urls:
        html = urllib.urlopen(url).read()
        soup = BeautifulSoup(html)
        # kill all script and style elements

        for script in soup(["script", "style"]):
            script.extract()    # rip it out

        # get text
        text = soup.get_text()

        # break into lines and remove leading and trailing space on each
        lines = (line.strip() for line in text.splitlines())
        # break multi-headlines into a line each
        chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
        # drop blank lines
        text = '\n'.join(chunk for chunk in chunks if chunk)

        #print(text)

        wiki = TextBlob(text)
        r = wiki.sentiment.polarity

        print r




parse_websites(urls)

Output:

>>> 
0.10863027172
0.156074203574
0.0766585497835
0.0315555555556
0.0752548359411
0.0902824858757
>>> 

But when I use the return variable to form a list so I can work with those values, I get no result. The code:

import requests
import json
import urllib
from bs4 import BeautifulSoup
from textblob import TextBlob



#you can add to this
urls = ["http://www.thestar.com/business/economy/2015/05/19/canadian-consumer-confidence-dips-but-continues-to-climb-in-us-report.html",
        "http://globalnews.ca/news/2012054/canada-ripe-for-an-invasion-of-u-s-dollar-stores-experts-say/",
        "http://www.cp24.com/news/tsx-flat-in-advance-of-fed-minutes-loonie-oil-prices-stabilize-1.2381931",
        "http://www.marketpulse.com/20150522/us-and-canadian-gdp-to-close-out-week-in-fx/",
        "http://www.theglobeandmail.com/report-on-business/canada-pension-plan-fund-sees-best-ever-annual-return/article24546796/",
        "http://www.marketpulse.com/20150522/canadas-april-inflation-slowest-in-two-years/"]


def parse_websites(list_of_urls):
    for url in list_of_urls:
        html = urllib.urlopen(url).read()
        soup = BeautifulSoup(html)
        # kill all script and style elements

        for script in soup(["script", "style"]):
            script.extract()    # rip it out

        # get text
        text = soup.get_text()

        # break into lines and remove leading and trailing space on each
        lines = (line.strip() for line in text.splitlines())
        # break multi-headlines into a line each
        chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
        # drop blank lines
        text = '\n'.join(chunk for chunk in chunks if chunk)

        #print(text)

        wiki = TextBlob(text)
        r = wiki.sentiment.polarity
        r = []
        return [r]




parse_websites(urls)

Output:

Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>> 
>>> 

How can I make it so that I can work with the numbers, and add to and subtract from a list like [r1, r2, r3, ...]?

Thank you in advance.

In the code below you are asking Python to return an empty list: r is overwritten with an empty list right after the polarity score is computed, and the return statement also sits inside the for loop, so the function exits on the very first url:

r = wiki.sentiment.polarity

r = []     # create empty list r, throwing the score away
return [r] # return a list containing that empty list

If I understand your question correctly, then all you need to do is:

my_list = [] # create empty list

for url in list_of_urls:
    html = urllib.urlopen(url).read()
    soup = BeautifulSoup(html)

    for script in soup(["script", "style"]):
        script.extract()    # rip it out

    text = soup.get_text()

    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    text = '\n'.join(chunk for chunk in chunks if chunk)

    wiki = TextBlob(text)
    r = wiki.sentiment.polarity

    my_list.append(r) # add r to list my_list

print my_list

[r1,r2,r3,...]
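To keep the return-based approach from the question, do the same thing inside parse_websites and put return my_list after the loop (not inside it), so the caller gets the whole list back. Below is a minimal sketch of the follow-on analysis the question mentions, computing the average and drawing a simple graph; it assumes matplotlib is installed (not part of the original code) and reuses the six scores from the output above:

import matplotlib.pyplot as plt

# scores as returned by parse_websites(urls); hard-coded here
# with the six values from the question's output
scores = [0.10863027172, 0.156074203574, 0.0766585497835,
          0.0315555555556, 0.0752548359411, 0.0902824858757]

average = sum(scores) / len(scores)  # mean polarity
print average

plt.bar(range(len(scores)), scores)  # one bar per article
plt.axhline(average, color='red')    # horizontal line at the mean
plt.xlabel('article index')
plt.ylabel('sentiment polarity')
plt.show()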

Alternatively, you could create a dictionary with the url as the key:

my_dictionary = {}

# inside the for loop:
r = wiki.sentiment.polarity
my_dictionary[url] = r

# after the loop:
print my_dictionary

{'url1': r1, 'url2': r2, ...}

print my_dictionary['url1']

r1

The dictionary may make more sense for you, since retrieving, editing, and deleting an 'r' value is easier with the url as its key.
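For example (a small sketch; 'url1' and 'url2' are hypothetical shortened stand-ins for the full urls, with illustrative values):

my_dictionary = {'url1': 0.109, 'url2': 0.156}  # illustrative values

print my_dictionary['url1']   # retrieve a score by its url
my_dictionary['url2'] = 0.2   # edit a stored score
del my_dictionary['url1']     # delete an entry

# the stored values work for arithmetic too
print sum(my_dictionary.values()) / len(my_dictionary)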

I'm a bit new to Python, so if this doesn't make sense, hopefully someone else can correct me...
