How to download all images from a URL with Python 2.7 - Problem

I tried to download all the images from https://www.nytimes.com/section/todayspaper with the following code:

import requests
from io import open as iopen
from urlparse import urlsplit

file_url= 'https://www.nytimes.com/section/todayspaper'
def requests_image(file_url):
    suffix_list = ['jpg', 'gif', 'png', 'tif', 'svg',]
    file_name =  urlsplit(file_url)[2].split('/')[-1]
    file_suffix = file_name.split('.')[1]
    i = requests.get(file_url)
    if file_suffix in suffix_list and i.status_code == requests.codes.ok:
        with iopen(file_name, 'wb') as file:
            file.write(i.content)
    else:
        return False

No errors occur when I run it:

>>> 
>>> 

But I don't know where on my PC the images were downloaded.

I checked the Downloads folder, but they are not there.

Nothing was actually downloaded: your snippet only defines requests_image and never calls it, which is why it finishes with no output and no errors. Even if you did call it with that URL, file_name.split('.')[1] would raise an IndexError, because the page URL ends in "todayspaper" and contains no file extension. Also note that files written with open(file_name, 'wb') go to the interpreter's current working directory, not to the Downloads folder. If you want to download all the images on a page, you should:

  • Download the web page
  • Find all the image tags (<img>)
  • Read the src attribute of each image tag
  • Download the files from the collected links

import os
import hashlib
from urlparse import urljoin  # Python 2.7; on Python 3 use: from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


page_url = 'https://www.nytimes.com/section/todayspaper'

# Download the page HTML
page_data = requests.get(page_url).text

# Collect the src attribute of every <img> tag on the page
images_urls = [
    image.attrs.get('src')
    for image in BeautifulSoup(page_data, 'lxml').find_all('img')
]

# Drop empty links (<img src="" />, <img>, etc.) and resolve
# relative paths like /images/foo.jpg against the page URL
images_urls = [
    urljoin(page_url, image_url)
    for image_url in images_urls
    if image_url
]

# Download one file, naming it by the MD5 hash of its URL so that
# distinct URLs never collide; keep the original file extension
def download_image(source_url, dest_dir):
    if not os.path.isdir(dest_dir):
        os.makedirs(dest_dir)

    extension = os.path.splitext(source_url.split('?')[0])[1]
    image_name = hashlib.md5(source_url.encode()).hexdigest() + extension

    with open(os.path.join(dest_dir, image_name), 'wb') as f:
        image_data = requests.get(source_url).content
        f.write(image_data)


for image_url in images_urls:
    download_image(image_url, './tmp')
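
As for where the files end up: a relative path such as './tmp' is resolved against the interpreter's current working directory, not against your Downloads folder. A minimal check, assuming the script above was just run from the same directory:

import os

# The directory that relative paths like './tmp' are resolved against
print(os.getcwd())

# One file per downloaded image, named by the MD5 hash of its URL
print(os.listdir('./tmp'))

If you see fewer images in ./tmp than in the browser, one likely reason is lazy loading: some pages keep the real URL in attributes such as data-src or srcset rather than src, and this sketch only reads src.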
