
Fetching live data from a website with continuously updating data

When `html = urllib.request.urlopen(req)` is inside the while loop, the data is fetched fine, but each fetch takes about 3 seconds. So I thought that if I moved it outside the loop it might be faster, since it wouldn't have to open the URL every time, but that raises `AttributeError: 'str' object has no attribute 'read'`. Maybe it doesn't recognize the `html` variable name. How can I speed this up?

def soup():
    url = "http://www.investing.com/indices/major-indices"
    req = urllib.request.Request(
        url,
        data=None,
        headers={
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
            'Connection': 'keep-alive'
        }
    )
    global Ltp
    global html
    html = urllib.request.urlopen(req)
    while True:
        html = html.read().decode('utf-8')
        bsobj = BeautifulSoup(html, "lxml")

        Ltp = bsobj.find("td", {"class": "pid-169-last"})
        Ltp = Ltp.text
        Ltp = Ltp.replace(',', '')
        os.system('cls')
        Ltp = float(Ltp)
        print(Ltp, datetime.datetime.now())

soup()

If you want to fetch it live, you need to call the URL periodically:

html = urllib.request.urlopen(req)

This call should be inside the loop.

import os
import urllib
import datetime
from bs4 import BeautifulSoup
import time


def soup():
    url = "http://www.investing.com/indices/major-indices"
    req = urllib.request.Request(
        url,
        data=None,
        headers={
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
            'Connection': 'keep-alive'
        }
    )
    global Ltp
    global html
    while True:
        html = urllib.request.urlopen(req)
        ok = html.read().decode('utf-8')
        bsobj = BeautifulSoup(ok, "lxml")

        Ltp = bsobj.find("td", {"class": "pid-169-last"})
        Ltp = Ltp.text
        Ltp = Ltp.replace(',', '')
        os.system('cls')
        Ltp = float(Ltp)
        print (Ltp, datetime.datetime.now())
        time.sleep(3)

soup()

Result:

sh: cls: command not found
18351.61 2016-08-31 23:44:28.103531
sh: cls: command not found
18351.54 2016-08-31 23:44:36.257327
sh: cls: command not found
18351.61 2016-08-31 23:44:47.645328
sh: cls: command not found
18351.91 2016-08-31 23:44:55.618970
sh: cls: command not found
18352.67 2016-08-31 23:45:03.842745
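The `sh: cls: command not found` lines appear because `cls` is a Windows-only shell command; on macOS and Linux the equivalent is `clear`. A minimal portable sketch (the `clear_cmd` name is just an illustration):

```python
import os

# 'cls' exists only on Windows (os.name == 'nt'); POSIX shells use 'clear'.
# Selecting the command by platform avoids the "command not found" noise.
clear_cmd = 'cls' if os.name == 'nt' else 'clear'
print(clear_cmd)
```

Then call `os.system(clear_cmd)` in the loop instead of hard-coding `'cls'`.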

Reassigning `html` to the decoded UTF-8 string and then calling `read` on it as if it were still an IO object is what raises the error, since strings have no `read` method. And even if it did not error, this code would not fetch fresh data from the server on each loop: `read` simply drains the bytes from the already-opened response object; it does not make a new request.
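That read-once behaviour can be demonstrated without any network at all, using an in-memory stream, which exposes the same file-like `read` interface as the response returned by `urlopen`:

```python
import io

# Like an HTTPResponse, a BytesIO stream is consumed by read():
# the first read() returns everything, the second returns b"".
buf = io.BytesIO(b"<html>first response</html>")

first = buf.read()   # drains the whole stream
second = buf.read()  # stream exhausted; no new data arrives

print(first)
print(second)
```

This is why `urlopen` must be called again inside the loop: only a new request produces a new stream to read.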

You can speed things up with the Requests library and take advantage of persistent connections (or use urllib3 directly).

Try this (you will need `pip install requests`):

import os
import datetime

from requests import Request, Session
from bs4 import BeautifulSoup

s = Session()

while True:
    resp = s.get("http://www.investing.com/indices/major-indices")
    bsobj = BeautifulSoup(resp.text, "html.parser")
    Ltp = bsobj.find("td", {"class": "pid-169-last"})
    Ltp = Ltp.text
    Ltp = Ltp.replace(',', '')
    os.system('cls')
    Ltp = float(Ltp)
    print(Ltp, datetime.datetime.now())

