
Can't scrape some static image links from a webpage using requests

I'm trying to scrape the images from the landing page of a website. All the images are inside elements with the search_results class. When I run the script below, I get no results. I checked the status code and saw that the script gets a 403 response.


How can I scrape the image links using requests as the images are static and available in the page source?

import requests
from bs4 import BeautifulSoup

url = 'https://pixabay.com/images/search/office/'

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36',
}

r = requests.get(url,headers=headers)
print(r.status_code)
soup = BeautifulSoup(r.text,"lxml")
for item in soup.select(".search_results a > img[src]"):
    print(item.get("src"))

I'm not looking for any solution based on a browser simulator such as Selenium.

This answer uses Selenium. For some reason, however, it does not seem to find the images in headless mode:

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup


options = webdriver.ChromeOptions()
#options.add_argument("headless")
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options)
try:
    driver.implicitly_wait(3)
    driver.get('https://pixabay.com/images/search/office')
    # find_elements_by_css_selector was removed in Selenium 4; use find_elements(By.CSS_SELECTOR, ...)
    images = driver.find_elements(By.CSS_SELECTOR, '.search_results a > img[src]')  # wait for images to show up
    soup = BeautifulSoup(driver.page_source, 'lxml')
    for item in soup.select(".search_results a > img[src]"):
        print(item.get("src"))
finally:
    driver.quit()

Prints:

https://cdn.pixabay.com/photo/2016/03/09/09/22/workplace-1245776__340.jpg
https://cdn.pixabay.com/photo/2015/01/08/18/26/write-593333__340.jpg
https://cdn.pixabay.com/photo/2015/02/02/11/09/office-620822__340.jpg
https://cdn.pixabay.com/photo/2014/05/02/21/50/home-office-336378__340.jpg
https://cdn.pixabay.com/photo/2016/02/19/11/19/office-1209640__340.jpg
https://cdn.pixabay.com/photo/2015/02/02/11/08/office-620817__340.jpg
https://cdn.pixabay.com/photo/2016/03/26/13/09/cup-of-coffee-1280537__340.jpg
https://cdn.pixabay.com/photo/2017/05/11/11/15/workplace-2303851__340.jpg
https://cdn.pixabay.com/photo/2015/01/09/11/08/startup-594090__340.jpg
https://cdn.pixabay.com/photo/2015/01/08/18/25/startup-593327__340.jpg
https://cdn.pixabay.com/photo/2015/01/08/18/27/startup-593341__340.jpg
https://cdn.pixabay.com/photo/2014/05/02/21/49/home-office-336373__340.jpg
https://cdn.pixabay.com/photo/2015/01/09/11/11/office-594132__340.jpg
https://cdn.pixabay.com/photo/2017/05/04/16/37/meeting-2284501__340.jpg
https://cdn.pixabay.com/photo/2014/05/03/01/03/macbook-336704__340.jpg
https://cdn.pixabay.com/photo/2018/01/11/21/27/desk-3076954__340.jpg
/static/img/blank.gif
/static/img/blank.gif
/static/img/blank.gif
... (/static/img/blank.gif repeated for every remaining image that had not yet been lazy-loaded)
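The blank.gif entries are placeholders for images the page had not yet lazy-loaded. If you only want usable links from output like the one above, a small post-processing step can drop the placeholders and absolutize any relative paths. This is my own sketch, not part of the original answer; clean_image_urls and the base URL are assumed names:

```python
from urllib.parse import urljoin

BASE = 'https://pixabay.com'  # assumed base for resolving relative paths

def clean_image_urls(srcs, base=BASE):
    """Drop lazy-load placeholders and make the remaining URLs absolute."""
    cleaned = []
    for src in srcs:
        if 'blank.gif' in src:          # placeholder for a not-yet-loaded image
            continue
        cleaned.append(urljoin(base, src))  # no-op for already-absolute URLs
    return cleaned

sample = [
    'https://cdn.pixabay.com/photo/2016/03/09/09/22/workplace-1245776__340.jpg',
    '/static/img/blank.gif',
]
print(clean_image_urls(sample))
```

urljoin leaves absolute CDN links untouched, so the same function works whether the scraped src was relative or absolute.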

This page uses JavaScript and cookies, and that causes problems. It also checks other headers, not only User-Agent.

First: you have to use requests.Session() to keep cookies. Second: you have to load some page (e.g. the main page) to get those cookies; once you have the cookies, it will accept other URLs. Third: it also checks other headers before it will send cookies.

I opened the page in a browser, used DevTools in Chrome/Firefox to copy all the headers the real browser sends, and then tested requests with different header combinations. Finally I found that it needs:

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36',
    'Accept-Language': 'en-US;q=0.7,en;q=0.3',
    'Cache-Control': 'no-cache',
}

Another problem is that the page uses JavaScript to load images as you scroll ("lazy loading"): some URLs are not in src but in data-lazy, and src then holds 'blank.gif'.


import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36',
    #"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    #"Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "en-US;q=0.7,en;q=0.3",
    "Cache-Control": "no-cache",
    #"Connection": "keep-alive",
    #"Pragma": "no-cache",
}

s = requests.Session()
s.headers.update(headers)  # use these headers in all requests

# --- get cookies ---

url = 'https://pixabay.com/'

r = s.get(url)
print(r.status_code)  # 403, but that is not a problem - the cookies are set anyway

# only for testing
#r = s.get(url)
#print(r.status_code)  # 200 because the session already has cookies

# --- get images ---

url = 'https://pixabay.com/images/search/office/'

r = s.get(url)
print(r.status_code)
#print(r.text)

results = []

soup = BeautifulSoup(r.text, "lxml")

for item in soup.select(".search_results a > img[src]"):
    src = item.get("src")
    if src is not None and 'blank.gif' not in src:
        print('src:', src)
        results.append(src)
    else:
        src = item.get("data-lazy")
        print('data-lazy:', src)
        results.append(src)

print('len:', len(results))

It looks like Pixabay is using Cloudflare's Web Application Firewall (WAF) or similar. This is quite tedious to get around manually.

cloudflare-scrape is a library that might be of assistance: https://github.com/Anorov/cloudflare-scrape
