Download all Images in a Web directory
I am trying to gather all images in a specific directory on my web server, using BeautifulSoup4.
So far I have this code:
from init import *
from bs4 import BeautifulSoup
import urllib
import urllib.request

# use this image scraper from the location that
# you want to save scraped images to
def make_soup(url):
    html = urllib.request.urlopen(url)
    return BeautifulSoup(html, features="html.parser")

def get_images(url):
    soup = make_soup(url)
    # this makes a list of bs4 element tags
    images = [img for img in soup.findAll('img')]
    print(str(len(images)) + "images found.")
    print('Downloading images to current working directory.')
    # compile our unicode list of image links
    image_links = [each.get('src') for each in images]
    for each in image_links:
        filename = each.split('/')[-1]
        urllib.request.Request(each, filename)
    return image_links

# a standard call looks like this
get_images('https://omabilder.000webhostapp.com/img/')
This, however, spits out the following error:
7images found.
Downloading images to current working directory.
Traceback (most recent call last):
File "C:\Users\MyPC\Desktop\oma projekt\getpics.py", line 1, in <module>
from init import *
File "C:\Users\MyPC\Desktop\oma projekt\init.py", line 9, in <module>
from getpics import *
File "C:\Users\MyPC\Desktop\oma projekt\getpics.py", line 26, in <module>
get_images('https://omabilder.000webhostapp.com/img/')
File "C:\Users\MyPC\Desktop\oma projekt\getpics.py", line 22, in get_images
urllib.request.Request(each, filename)
File "C:\Users\MyPC\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 328, in __init__
self.full_url = url
File "C:\Users\MyPC\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 354, in full_url
self._parse()
File "C:\Users\MyPC\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 383, in _parse
raise ValueError("unknown url type: %r" % self.full_url)
ValueError: unknown url type: '/icons/blank.gif'
What I do not understand is the following: there is no GIF in the directory, and no /icons/ subdirectory either. Furthermore, it reports that 7 images were found, when only about 3 were uploaded to the website.
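The ValueError itself can be reproduced in isolation: urllib.request.Request rejects any URL that has no scheme, and a root-relative src such as /icons/blank.gif is exactly that. A minimal sketch (an editor's note, not part of the original post) showing the failure, and how urllib.parse.urljoin resolves the relative path against the page URL:

```python
from urllib.parse import urljoin
from urllib.request import Request

base = 'https://omabilder.000webhostapp.com/img/'
src = '/icons/blank.gif'  # root-relative path, as scraped from the page

# A scheme-less URL makes Request() raise ValueError
try:
    Request(src)
except ValueError as err:
    print(err)  # unknown url type: '/icons/blank.gif'

# Resolving against the page URL yields a full, fetchable URL
print(urljoin(base, src))
# https://omabilder.000webhostapp.com/icons/blank.gif
```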
The gifs are the icons next to the links on your website (tiny ~20x20 px images); they're actually shown on the page. If I understand correctly, you want to download the png images -- those are links, rather than images, at the URL you've provided.
If you want to download the png images from the links, you can use something like this:
from bs4 import BeautifulSoup
import urllib
import urllib.request
import os

# use this image scraper from the location that
# you want to save scraped images to
def make_soup(url):
    html = urllib.request.urlopen(url)
    return BeautifulSoup(html, features="html.parser")

def get_images(url):
    soup = make_soup(url)
    # get all links ("a" tags with an href attribute)
    images = [link["href"] for link in soup.find_all('a', href=True)]
    # keep the ones that end with png
    images = [im for im in images if im.endswith(".png")]
    print(str(len(images)) + " images found.")
    print('Downloading images to current working directory.')
    # download each linked image
    for each in images:
        urllib.request.urlretrieve(os.path.join(url, each), each)
    return images

# a standard call looks like this
get_images('https://omabilder.000webhostapp.com/img/')
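One robustness note on the loop above (an editor's aside, not part of the original answer): os.path.join is a filesystem-path function, and it only produces a valid URL here because the base already ends with "/" and the hrefs are bare filenames. urllib.parse.urljoin is the URL-aware way to combine the two, and it also handles root-relative links like the /icons/blank.gif from the traceback:

```python
from urllib.parse import urljoin
import os.path

base = 'https://omabilder.000webhostapp.com/img/'

# Bare filename ('photo.png' is a hypothetical example):
# both approaches happen to agree, because base ends with '/'
print(os.path.join(base, 'photo.png'))
print(urljoin(base, 'photo.png'))
# https://omabilder.000webhostapp.com/img/photo.png

# Root-relative href: only urljoin resolves it against the host
print(urljoin(base, '/icons/blank.gif'))
# https://omabilder.000webhostapp.com/icons/blank.gif
```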