Beautiful Soup error: 'NoneType' object has no attribute 'text' when retrieving stock data
My goal is to sequentially fetch URLs from an SQLite3 database and download stock quotes from a website. The download routine for a single title, without the iterative process, works:
import requests
from bs4 import BeautifulSoup as bs
input_str = input("Insert stock's url:")
if input_str == "":
    input_str = "https://www.teleborsa.it/indici-italia/ftse-mib"
res = requests.get(input_str)
soup = bs(res.content, 'lxml')
price = soup.find("span", class_="h-price fc0").text
print("Stock price ", input_str, " è ", price)
The problem appears when I use an SQLite database that contains several URLs. The SQL query is correct and the table is read, but the following code raises an error:
# reading records
for row in rows:
    input_str = row[6]
    res = request.get(input_str)
    soup = bs(res.content, 'html.parser')
    price = soup.find("span", class_="h-price fc0").text
    curdata = soup.find("div", class_="header-bottom fc3").text
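For context, the traceback in the title means that one of the `soup.find(...)` calls returned `None` (no matching tag on that page), and `.text` was then called on `None`. A minimal, self-contained sketch of a guarded lookup (the HTML snippets here are made up for illustration):

```python
from bs4 import BeautifulSoup as bs

def extract_price(html):
    """Return the price text, or None when the expected tag is missing."""
    soup = bs(html, "html.parser")
    tag = soup.find("span", class_="h-price fc0")
    # Guard: soup.find returns None when the tag is absent,
    # so calling .text unconditionally would raise AttributeError.
    return tag.text if tag is not None else None

print(extract_price('<span class="h-price fc0">24.512,34</span>'))  # 24.512,34
print(extract_price('<p>page not found</p>'))                        # None
```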
I just did a test like the one below and everything was OK:
rows = ["https://www.teleborsa.it/indici-italia/ftse-mib" for _ in range(5)]
for row in rows:
    input_str = row
    res = requests.get(input_str)
    soup = bs(res.content, 'html.parser')
    price = soup.find("span", class_="h-price fc0").text
    curdata = soup.find("div", class_="header-bottom fc3").text
    print(price, "\t\t", curdata)
First, you missed an "s": it should be `requests.get`, not `request.get`. The problem is probably in the part with `row[6]`. Make sure you are using correct URLs when retrieving them from the database. If you want more help, please share some of the database output.
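One quick way to inspect what the database actually returns is to print each candidate URL and flag anything that is not a plausible HTTP URL. This is a hedged sketch: the column index and the sample rows below are assumptions, not the asker's real schema.

```python
def check_urls(rows, col=6):
    """Return (url, is_plausible) pairs for the given column of each row."""
    result = []
    for row in rows:
        url = row[col]
        # A very loose plausibility check: must be a string starting with "http".
        plausible = isinstance(url, str) and url.startswith("http")
        result.append((url, plausible))
    return result

# Made-up rows mimicking a 7-column table (column layout is an assumption):
sample = [
    (0, 0, 0, 0, 0, 0, "https://www.teleborsa.it/indici-italia/ftse-mib"),
    (0, 0, 0, 0, 0, 0, None),
]
for url, ok in check_urls(sample):
    print(repr(url), "->", "OK" if ok else "SUSPICIOUS")
```

Rows flagged as suspicious are the ones most likely to produce pages without the expected `span`, and hence the `NoneType` error.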
Here is the complete code. With only one record, the code works, so I suspect some of the URLs are incorrect.
import sqlite3
from openpyxl import Workbook
import datetime
import requests
from bs4 import BeautifulSoup as bs
# connection
con = sqlite3.connect("/demo.sqlite")
cursor = con.cursor()
cursor.execute("select * from tbLista")
rows = cursor.fetchall()

# prepare the sheet
wb = Workbook()
ws = wb.active
ws.delete_cols(1, 2)
ws['A1'] = "Isin"
ws['B1'] = "Price"
riga = 1

# read the records
for row in rows:
    riga += 1
    ws['A' + str(riga)] = row[4]
    ws['B' + str(riga)] = row[5]
    input_str = row[5]
    res = requests.get(input_str)
    soup = bs(res.content, 'lxml')
    price = soup.find("span", class_="h-price fc0").text
    curdata = soup.find("div", class_="header-bottom fc3").text
    firstmod = price.replace('.', '')
    secondmod = float(firstmod.replace(',', '.'))
    ws['B' + str(riga)] = secondmod

# close
cursor.close()
wb.save("demo.xlsx")
print("ok")
I haven't written any error-handling code around the soup lookups.
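Since there is no error handling around the requests and soup lookups, one way to keep the loop running past bad URLs is to wrap the scraping part in a small function that returns `None` on failure instead of raising. This is only a sketch under the question's assumptions (`parse_row` is a hypothetical name, not the asker's code):

```python
import requests
from bs4 import BeautifulSoup as bs

def parse_row(url):
    """Fetch a quote page and return the price text, or None on any failure."""
    try:
        res = requests.get(url, timeout=10)
        res.raise_for_status()  # turn HTTP error statuses into exceptions
    except requests.RequestException as exc:
        print(f"skipping {url!r}: {exc}")
        return None
    soup = bs(res.content, "html.parser")
    span = soup.find("span", class_="h-price fc0")
    if span is None:  # page exists but has no price tag
        print(f"skipping {url!r}: price tag not found")
        return None
    return span.text
```

In the main loop, a row whose `parse_row` result is `None` can simply be skipped instead of crashing the whole export.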