
Error while scraping images with BeautifulSoup

The original code is here: https://github.com/amitabhadey/Web-Scraping-Images-using-Python-via-BeautifulSoup-/blob/master/code.py

So I am trying to adapt a Python script to collect pictures from a website to get better at web scraping.

I tried to get images from https://500px.com/editors

The first error was:

The code that caused this warning is on line 12 of the file /Bureau/scrapper.py. To get rid of this warning, pass the additional argument 'features="lxml"' to the BeautifulSoup constructor.

So I did:

soup = BeautifulSoup(plain_text, features="lxml")

I also adapted the class name to match the tag used on 500px.

But now the script runs to the end and nothing happens.

In the end it looks like this:

import requests
from bs4 import BeautifulSoup
import urllib.request
import random

url = "https://500px.com/editors"

# Fetch the page and parse it
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, features="lxml")

# Find every photo link and download it under a random numeric name
for link in soup.find_all("a", {"class": "photo_link "}):
    href = link.get('href')
    print(href)

    img_name = random.randrange(1, 500)
    full_name = str(img_name) + ".jpg"
    urllib.request.urlretrieve(href, full_name)

    print("loop break")

What did I do wrong?

Actually, the website is loaded via JavaScript, using an XHR request to the following API.

So you can reach it directly via the API.

Note that you can increase the rpp=50 parameter to whatever number you want in order to get more than 50 results.

import requests

# Call the JSON API directly (the same XHR request the page makes)
r = requests.get("https://api.500px.com/v1/photos?rpp=50&feature=editors&image_size%5B%5D=1&image_size%5B%5D=2&image_size%5B%5D=32&image_size%5B%5D=31&image_size%5B%5D=33&image_size%5B%5D=34&image_size%5B%5D=35&image_size%5B%5D=36&image_size%5B%5D=2048&image_size%5B%5D=4&image_size%5B%5D=14&sort=&include_states=true&include_licensing=true&formats=jpeg%2Clytro&only=&exclude=&personalized_categories=&page=1&rpp=50").json()

for item in r['photos']:
    print(item['url'])
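If you want more than one page, here is a sketch of my own (assuming the endpoint accepts the same query string passed as a params dict, with page and rpp working as usual) that loops over the page parameter:

import requests

# Sketch only: the page/rpp values and the reduced parameter set are illustrative.
BASE = "https://api.500px.com/v1/photos"
params = {
    "feature": "editors",
    "image_size[]": [4, 2048],  # requests encodes list values as repeated image_size[]=...
    "rpp": 50,                  # results per page
}

for page in range(1, 4):        # first three pages
    params["page"] = page
    data = requests.get(BASE, params=params).json()
    for item in data["photos"]:
        print(item["url"])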

You can also access the image URL itself in order to save it directly:

import requests

r = requests.get("https://api.500px.com/v1/photos?rpp=50&feature=editors&image_size%5B%5D=1&image_size%5B%5D=2&image_size%5B%5D=32&image_size%5B%5D=31&image_size%5B%5D=33&image_size%5B%5D=34&image_size%5B%5D=35&image_size%5B%5D=36&image_size%5B%5D=2048&image_size%5B%5D=4&image_size%5B%5D=14&sort=&include_states=true&include_licensing=true&formats=jpeg%2Clytro&only=&exclude=&personalized_categories=&page=1&rpp=50").json()

for item in r['photos']:
    # image_url is a list of sizes; the last entry is the largest requested
    print(item['image_url'][-1])

Note that the image_url key holds different image sizes, so you can choose your preferred one and save it. Here I've taken the big one.
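To see which sizes actually come back, a quick sketch (again an assumption on my part: image_url appears to be a list whose order follows the image_size[] values sent in the request):

import requests

# Illustrative only: request just two sizes for one photo and print what returns.
r = requests.get(
    "https://api.500px.com/v1/photos"
    "?feature=editors&rpp=1&image_size%5B%5D=4&image_size%5B%5D=2048"
).json()

for i, url in enumerate(r["photos"][0]["image_url"]):
    print(i, url)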

Saving directly:

import requests

with requests.Session() as req:  # reuse one connection for all requests
    r = req.get("https://api.500px.com/v1/photos?rpp=50&feature=editors&image_size%5B%5D=1&image_size%5B%5D=2&image_size%5B%5D=32&image_size%5B%5D=31&image_size%5B%5D=33&image_size%5B%5D=34&image_size%5B%5D=35&image_size%5B%5D=36&image_size%5B%5D=2048&image_size%5B%5D=4&image_size%5B%5D=14&sort=&include_states=true&include_licensing=true&formats=jpeg%2Clytro&only=&exclude=&personalized_categories=&page=1&rpp=50").json()
    for item in r['photos']:
        print(f"Downloading {item['name']}")
        save = req.get(item['image_url'][-1])
        # Strip the "filename=" prefix from the Content-Disposition header
        name = save.headers.get("Content-Disposition")[9:]
        with open(name, 'wb') as f:
            f.write(save.content)
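One caveat with the snippet above: headers.get("Content-Disposition") returns None when the header is missing, and the [9:] slice would then raise a TypeError. A defensive variant (a sketch; the fallback of deriving a name from the URL is my own choice, not part of the original answer):

import os
from urllib.parse import urlparse

def pick_name(resp, url):
    # Prefer the server-supplied filename, stripping the "filename=" prefix;
    # otherwise fall back to the last path segment of the URL.
    cd = resp.headers.get("Content-Disposition", "")
    if cd.startswith("filename="):
        return cd[len("filename="):]
    return os.path.basename(urlparse(url).path) or "photo.jpg"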

Looking at the page you're trying to scrape, I noticed something: the data doesn't appear until a few moments after the page finishes loading. This tells me that they're using a JS framework to load the images after page load.

Your scraper will not work on this page because it does not run the JS on the pages it pulls. Running your script and printing out what plain_text contains proves this:

<a class='photo_link {{#if hasDetailsTooltip}}px_tooltip{{/if}}' href='{{photoUrl}}'>

If you look at the href attribute on that tag, you'll see it's actually a templating tag used by JS UI frameworks.

Your options now are to either see what APIs they're calling to get this data (check the inspector in your web browser for network calls; if you're lucky, they may not require authentication) or to use a tool that runs JS on pages. One tool I've seen recommended for this is Selenium, though I've never used it, so I'm not fully aware of its capabilities; I imagine the tooling around this would drastically increase the complexity of what you're trying to do.
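For reference, here is a minimal Selenium sketch of that second option. This is an assumption on my part, not tested against 500px: the a.photo_link selector is taken from the question's code, and it assumes a Chrome driver is available on your machine.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Let a real browser execute the page's JS, then read the rendered DOM.
driver = webdriver.Chrome()  # assumes chromedriver is installed and on PATH
try:
    driver.get("https://500px.com/editors")
    driver.implicitly_wait(10)  # give the JS time to render the photo links
    for link in driver.find_elements(By.CSS_SELECTOR, "a.photo_link"):
        print(link.get_attribute("href"))
finally:
    driver.quit()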
