
Extract from webpage using bs4 and Python

How can I extract the number 1 from "Current stream number:   1" on the following website? My attempts so far using Python and bs4 have been unsuccessful.

Source of the page I am trying to scrape:

<head><link href="basic.css" rel="stylesheet" type="text/css"></head>
<body>
<p><b>STATUS</b><br>
<p><b>Device information:</b><br>
Hardware type:  
Exstreamer 110
 (ID 20)<br>
<br>
Firmware: Streaming Client<br>
FW version: B2.17&nbsp;-&nbsp;31/05/2010 (dd/mm/yyyy)<br>
WEB version: 04.00<br>
Bootloader version: 99.19<br>
Setup version: 01.02<br>
Sg version: A8.05&nbsp;-&nbsp;May 31 2010<br>
Fs version: A2.05&nbsp;-&nbsp;31/05/2010 (dd/mm/yyyy)<br>
<p><b>System status:</b><br>
Ticks: 1588923494 ms<br>
Uptime: 10178858 s<br>
<p><b>Streaming status:</b><br>
Volume: 90%<br>
Shuffle:   Off<br>
Repeat:   Off<br>
Output peak level L: -63dBFS<br>
Output peak level R: -57dBFS<br>
Buffer level: 65532 bytes<br>
RTP decoder latency: 0 ms; average 0 ms<br>
Current stream number:   1   <br>
Current URL: http://listen.qkradio.com.au:8382/listen.mp3<br>
Current channel: 0<br>
Stream bitrate: 32 kbps<br>

Code:

from bs4 import BeautifulSoup
import urllib2
import lxml

SERVER = 'http://xx.xx.xx.xx:8080/ixstatus.html'
authinfo = urllib2.HTTPPasswordMgrWithDefaultRealm()
authinfo.add_password(None, SERVER, 'user', 'password')
page = 'http://xxx.xxx.xxx.xxx:8080/ixstatus.html'
handler = urllib2.HTTPBasicAuthHandler(authinfo)
myopener = urllib2.build_opener(handler)
opened = urllib2.install_opener(myopener)
output = urllib2.urlopen(page)
#print output.read()
soup = BeautifulSoup(output.read(), "lxml")
#print(soup)

print "stream number:", soup.select('Current stream number')[0].text
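(Note that the code above targets Python 2: `urllib2` and the `print` statement no longer exist in Python 3, where the same functionality lives in `urllib.request`. A rough sketch of the equivalent basic-auth setup under Python 3, keeping the question's placeholder URL and credentials, might look like this:)

```python
# Sketch of the question's basic-auth setup ported to Python 3, where
# urllib2's classes live in urllib.request. The URL and credentials are
# the question's placeholders, not real values.
import urllib.request

SERVER = 'http://xx.xx.xx.xx:8080/ixstatus.html'

authinfo = urllib.request.HTTPPasswordMgrWithDefaultRealm()
authinfo.add_password(None, SERVER, 'user', 'password')

handler = urllib.request.HTTPBasicAuthHandler(authinfo)
opener = urllib.request.build_opener(handler)
urllib.request.install_opener(opener)

# With a real host filled in, the page body would then be fetched with:
# html = urllib.request.urlopen(SERVER).read()
```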

Your select call makes BS4 use a CSS selector to look for something that does not exist: a <number> element inside a <stream> element inside a <Current> element.

Since the HTML has no class or id attributes you could use to locate the data you want, your best (only?) option is to walk through the paragraphs and search for the substring, e.g. `Current stream number: some_number`, with a regular expression.

Here is how I would do it:

import re
import bs4

page = "html code to scrape"

# this pattern will be used to find data we want
pattern = r'\s*Current\s+stream\s+number:\s*(\d+)'

soup = bs4.BeautifulSoup(page, 'lxml')

paragraphs = soup.find_all('p')
data = []
for para in paragraphs:
    # collect every capture group (the digits after the label)
    found = re.finditer(pattern, para.text, re.IGNORECASE)
    data.extend(x.group(1) for x in found)

print(data)
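Run against the relevant paragraph from the question's page source (trimmed here to a few lines), this approach pulls out the stream number. The sketch below uses the standard library's `html.parser` instead of `lxml` so it carries no extra dependency:

```python
# Quick check of the regex approach against a trimmed copy of the
# question's page source, using the stdlib html.parser backend.
import re
import bs4

page = """
<p><b>Streaming status:</b><br>
Volume: 90%<br>
Current stream number:   1   <br>
Current URL: http://listen.qkradio.com.au:8382/listen.mp3<br>
"""

pattern = r'Current\s+stream\s+number:\s*(\d+)'
soup = bs4.BeautifulSoup(page, 'html.parser')

data = []
for para in soup.find_all('p'):
    data.extend(m.group(1) for m in re.finditer(pattern, para.text, re.IGNORECASE))

print(data)  # ['1']
```

The same text node can also be located without the explicit loop via `soup.find(string=re.compile(pattern))`, which returns the string containing the first match; the loop version is kept here because it collects every match on the page.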
