[英]Scraping all the images from a specific part of a webpage using BeautifulSoup
The `gallery` object is what I have - how can I select just the image URLs without going the long way round?
Currently I'm doing the following:
from bs4 import BeautifulSoup
from PIL import Image
import requests

# soup is assumed to already be a BeautifulSoup object built from the page HTML
gallery = soup.findAll(class_='gallery')

img_0 = gallery[0].find('img')
img_1 = gallery[1].find('img')
...
img_x = gallery[x].find('img')

img_url_0 = img_0['src']
img_url_1 = img_1['src']
...
img_url_x = img_x['src']

gallery_img_0 = Image.open(requests.get(img_url_0, stream=True).raw)
gallery_img_1 = Image.open(requests.get(img_url_1, stream=True).raw)
...
gallery_img_x = Image.open(requests.get(img_url_x, stream=True).raw)
where x is the length of the iterable gallery list.
Maybe a loop? :s
Thanks, CN
You can load all of the images with a nested loop and store them in a list. For example:
galleries = soup.findAll(class_='gallery')

all_images = []
for gallery in galleries:
    for img in gallery.findAll('img'):
        gallery_img = Image.open(requests.get(img['src'], stream=True).raw)
        all_images.append(gallery_img)

# here, `all_images` contains all the loaded images
# ...
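If you only need the URLs rather than the opened images, a CSS selector via `soup.select()` avoids the nested loop entirely. A minimal self-contained sketch, using hypothetical stand-in HTML (the `example.com` URLs are placeholders, not from your page):

```python
from bs4 import BeautifulSoup

# Hypothetical HTML standing in for the real page you scraped.
html = """
<div class="gallery"><img src="https://example.com/a.jpg"></div>
<div class="gallery"><img src="https://example.com/b.jpg"></div>
"""
soup = BeautifulSoup(html, "html.parser")

# ".gallery img" matches every <img> nested anywhere inside a .gallery element,
# so one list comprehension collects all the URLs at once.
img_urls = [img["src"] for img in soup.select(".gallery img")]
print(img_urls)
```

You can then feed each URL to `requests.get(url, stream=True)` and `Image.open()` as in the loop above.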