
Download all images in a web directory

I am trying to gather all images in a specific directory on my webserver, using BeautifulSoup4.

So far I have this code:

from init import *
from bs4 import BeautifulSoup
import urllib
import urllib.request
# use this image scraper from the location that 
#you want to save scraped images to

def make_soup(url):
    html = urllib.request.urlopen(url)
    return BeautifulSoup(html, features="html.parser")

def get_images(url):
    soup = make_soup(url)
    #this makes a list of bs4 element tags
    images = [img for img in soup.findAll('img')]
    print (str(len(images)) + "images found.")
    print ('Downloading images to current working directory.')
    #compile our unicode list of image links
    image_links = [each.get('src') for each in images]
    for each in image_links:
        filename=each.split('/')[-1]
        urllib.request.Request(each, filename)
    return image_links

#a standard call looks like this
get_images('https://omabilder.000webhostapp.com/img/')

This, however, spits out the following error:

7images found.
Downloading images to current working directory.
Traceback (most recent call last):
  File "C:\Users\MyPC\Desktop\oma projekt\getpics.py", line 1, in <module>
    from init import *
  File "C:\Users\MyPC\Desktop\oma projekt\init.py", line 9, in <module>
    from getpics import *
  File "C:\Users\MyPC\Desktop\oma projekt\getpics.py", line 26, in <module>
    get_images('https://omabilder.000webhostapp.com/img/')
  File "C:\Users\MyPC\Desktop\oma projekt\getpics.py", line 22, in get_images
    urllib.request.Request(each, filename)
  File "C:\Users\MyPC\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 328, in __init__
    self.full_url = url
  File "C:\Users\MyPC\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 354, in full_url
    self._parse()
  File "C:\Users\MyPC\AppData\Local\Programs\Python\Python37-32\lib\urllib\request.py", line 383, in _parse
    raise ValueError("unknown url type: %r" % self.full_url)
ValueError: unknown url type: '/icons/blank.gif'

What I do not understand is the following:

There is no GIF in the directory and no /icons/ subdirectory. Furthermore, it reports that 7 images were found, when only about 3 have been uploaded to the website.

The GIFs are the icons next to the links on your website (tiny, ~20x20 px images); they are actually shown on the page. If I understand correctly, you want to download the PNG images -- at the URL you've provided, those appear as links rather than as `img` tags.
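You can see this by looking at the `src` attributes the original scraper collects: an Apache-style directory listing decorates each entry with a small icon GIF whose `src` is a root-relative path, which is exactly what the traceback shows. A minimal sketch (the `src` values are illustrative, not taken from the live page):

```python
from urllib.parse import urljoin

base = "https://omabilder.000webhostapp.com/img/"
# illustrative src values as a directory listing might contain them:
# the server adds icon GIFs (root-relative paths) next to each listed file
srcs = ["/icons/blank.gif", "/icons/image2.gif", "photo.png"]

for src in srcs:
    # a root-relative path is not a full URL, which is why
    # urllib.request raised "unknown url type: '/icons/blank.gif'";
    # urljoin resolves it against the page URL instead
    print(urljoin(base, src))
```

This also explains the count of 7: the icon GIFs next to each listing entry are counted alongside your actual images.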

If you want to download the PNG images from the links, you can use something like this:

from bs4 import BeautifulSoup
import urllib.request
from urllib.parse import urljoin
import os
# use this image scraper from the location that
# you want to save scraped images to

def make_soup(url):
    html = urllib.request.urlopen(url)
    return BeautifulSoup(html, features="html.parser")

def get_images(url):
    soup = make_soup(url)
    # get all links (<a> tags that have an href attribute)
    images = [link["href"] for link in soup.find_all('a', href=True)]
    # keep the ones that end with .png
    images = [im for im in images if im.endswith(".png")]
    print(str(len(images)) + " images found.")
    print('Downloading images to current working directory.')
    for each in images:
        # urljoin resolves relative hrefs against the page URL
        urllib.request.urlretrieve(urljoin(url, each), os.path.basename(each))
    return images

# a standard call looks like this
get_images('https://omabilder.000webhostapp.com/img/')
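One caveat: directory-listing pages also contain sort links (e.g. `?C=N;O=D`) and a parent-directory link, so filtering on the href is worth doing defensively. A hedged sketch of such a filter (the href values and the extension list are assumptions for illustration):

```python
# hypothetical hrefs as they might appear in an Apache-style directory listing
hrefs = ["?C=N;O=D", "/img/", "oma1.png", "oma2.png", "notes.txt"]

# assumed set of image extensions -- extend as needed
IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif")

def image_hrefs(hrefs):
    # drop sort links (start with "?"), directories, and non-image files
    return [h for h in hrefs
            if not h.startswith("?") and h.lower().endswith(IMAGE_EXTS)]

print(image_hrefs(hrefs))  # -> ['oma1.png', 'oma2.png']
```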
