
Cannot get an image from requests.get

I have two images that I want to process with some logic in Python. Here are the two URLs to the images:

https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png
https://www.hogan.com/fashion/hogan/HXW4350DM10NCR0RSZ/HXW4350DM10NCR0RSZ-02.png

To get these images I wrote the following script:

import requests
from PIL import Image
from io import BytesIO

url = "https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"

response = requests.get(url)
img = Image.open(BytesIO(response.content))

img.show()

This piece of code works fine and I get the image correctly.

But with the second image, I cannot get any response from the get call.


url = "https://www.hogan.com/fashion/hogan/HXW4350DM10NCR0RSZ/HXW4350DM10NCR0RSZ-02.png"

response = requests.get(url)
img = Image.open(BytesIO(response.content))

img.show()

Any help would be appreciated.

Python 3.9.4, requests 2.25.1

That's because the second URL requires an important header: the User-Agent.

Let's add it to your request:

import requests
from PIL import Image
from io import BytesIO

url = "https://www.hogan.com/fashion/hogan/HXW4350DM10NCR0RSZ/HXW4350DM10NCR0RSZ-02.png"

# Pretend to be a regular browser so the server accepts the request
headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36"
}
response = requests.get(url, headers=headers)
img = Image.open(BytesIO(response.content))

img.show()

How do we know if a website requires a User-Agent?
We don't know in advance, but if the browser can fetch the image and a plain request cannot, the request is probably missing something; most websites check headers to validate the request.
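If you want to diagnose this yourself, a quick way is to look at the status code and content type the server returns before passing the body to PIL. Here is a minimal diagnostic sketch, assuming the same Hogan URL; the timeout value and the printed fields are just illustrative:

import requests

url = "https://www.hogan.com/fashion/hogan/HXW4350DM10NCR0RSZ/HXW4350DM10NCR0RSZ-02.png"

# Plain request with no browser-like headers; a timeout avoids hanging forever
response = requests.get(url, timeout=10)

# Inspect what the server actually returned before handing it to PIL
print(response.status_code)                    # e.g. 403 would mean the server rejected the request
print(response.headers.get("Content-Type"))    # a real image should report something like image/png

# raise_for_status() turns 4xx/5xx responses into an exception instead of
# letting a non-image body reach Image.open()
response.raise_for_status()

If this raises an HTTPError, or the content type is not an image, the server is rejecting the bare request, and adding browser-like headers such as User-Agent is the usual fix.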
