
Extracting filenames from a string of HTML with Python 2.7

I am using BeautifulSoup to parse an HTML document.

from bs4 import BeautifulSoup
import requests
import re
page = requests.get("http://www.crmpicco.co.uk/?page_id=82&lottoId=27")

soup = BeautifulSoup(page.content, 'html.parser')
entry_content = soup.find_all('div', class_='entry-content')

print(entry_content[1])

This gives me the following string:

<div class="entry-content"><span class="red">Week 27: </span><br/><br/>Saturday 1st February 2020<br/>(in red)<br/><br/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif" vspace="12" width="70"/><br/><br/>Wednesday 5th February 2020<br/><br/><strong><span class="red">RESULTS NOT AVAILABLE</span></strong><br/><br/><br/><br/><a href="?page_id=82">Click here</a> to see other results.<br/> </div>

I want to get the filename from each gif path in the string. I think the findall method in the re module is the way to do this, but I haven't had much success with it.

What is the best way to do this? Can it be done in a single BeautifulSoup call?
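One BeautifulSoup-only approach, sketched under the assumption that entry_content[1] is the div shown above (no regular expressions needed):

import os.path

# Basename of each <img> src, with the directory and the .gif extension stripped.
filenames = [os.path.splitext(os.path.basename(img['src']))[0]
             for img in entry_content[1].find_all('img')]
print(filenames)  # ['17', '21', '31', '47', 'bonus43']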

I suggest using the HTMLParser class from the standard library (python2 / python3) rather than regular expressions. It has a handle_starttag method, which is called to handle the start of each tag.

>>> source = "\n".join(str(e) for e in entry_content) # I assume "entry_content" is a list of div elements; str() makes the Tags joinable.
>>>
>>> try:
...     from HTMLParser import HTMLParser # python 2
... except ImportError:
...     from html.parser import HTMLParser
...
>>> class SrcParser(HTMLParser):
...     def __init__(self, *args, **kwargs):
...         self.links = []
...         self._basename = kwargs.pop('only_basename', False)
...         HTMLParser.__init__(self, *args, **kwargs)  # direct call: HTMLParser is an old-style class on Python 2, where super() would fail
...
...     def handle_starttag(self, tag, attrs):
...         for attr, val in attrs:
...             if attr == 'src' and val.endswith("gif"):
...                 if self._basename:
...                     import os.path
...                     val = os.path.splitext(os.path.basename(val))[0]
...                 self.links.append(val)
...
>>> source_parser = SrcParser()
>>> source_parser.feed(source)
>>> print(*source_parser.links, sep='\n')
http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif
http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif
http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif
http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif
http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif
>>>
>>> source_parser = SrcParser(only_basename=True)
>>> source_parser.feed(source)
>>> print(*source_parser.links, sep='\n')
17
21
31
47
bonus43
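The same parser can also consume the raw page directly, skipping BeautifulSoup entirely; a sketch, assuming the SrcParser class above and the URL from the question:

import requests

page = requests.get("http://www.crmpicco.co.uk/?page_id=82&lottoId=27")
parser = SrcParser(only_basename=True)
parser.feed(page.text)  # feed() accepts a str; page.text is the decoded body
print(parser.links)

Note that this collects every .gif src on the whole page, not just those inside the entry-content div.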

I couldn't find any div with entry-content on your page, but this should work. Change col-md-4 to entry-content.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests


page = requests.get("http://www.crmpicco.co.uk/?page_id=82&lottoId=27")

soup = BeautifulSoup(page.content, 'html.parser')

for entry_content in soup.find_all('div', class_='col-md-4'):
    print(entry_content.img['src'].rsplit('/', 1)[-1].split('.')[0])

Output:

zce
691505
gaiq
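Note that entry_content.img only returns the first <img> in each div, so use find_all('img') when a div holds several, as the entry-content div in the question does. As for the chained calls, a small worked example on one of the URLs from the question:

src = "http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif"
tail = src.rsplit('/', 1)[-1]  # 'bonus43.gif' -- everything after the last '/'
name = tail.split('.')[0]      # 'bonus43'    -- drop the '.gif' extension
print(name)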

I recommend another solution, compatible with both Python 2 and Python 3, that is well suited to extracting data from XML.

from simplified_scrapy.simplified_doc import SimplifiedDoc
html = '''
<div class="entry-content"><span class="red">Week 27: </span><br/><br/>Saturday 1st February 2020<br/>(in red)<br/><br/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif" vspace="12" width="70"/><br/><br/>Wednesday 5th February 2020<br/><br/><strong><span class="red">RESULTS NOT AVAILABLE</span></strong><br/><br/><br/><br/><a href="?page_id=82">Click here</a> to see other results.<br/> </div>
'''
doc = SimplifiedDoc(html)
div = doc.select('div.entry-content')
srcs = div.selects('img>src()')
print(srcs)
print([src.rsplit('/', 1)[-1].split('.')[0] for src in srcs])

Result:

['http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif', 'http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif', 'http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif', 'http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif', 'http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif']
['17', '21', '31', '47', 'bonus43']

There are more examples here: https://github.com/yiyedata/simplified-scrapy-demo/blob/master/doc_examples/
