
Python requests Too Many Redirects during web scraping?

I want to scrape a site, but when I iterate over the result pages, after a few requests (about 30 at most) requests.get throws this error:

requests.exceptions.TooManyRedirects: Exceeded 30 redirects

The search URL gets redirected to the main page URL, and every subsequent URL behaves the same until I connect to a different VPN. Even when I spoof the user agent and rotate proxies from a list of free proxies, it still gets redirected after a few requests. I have never hit a problem like this while scraping before. What is the best way to bypass this "redirect block"? allow_redirects=False doesn't help here either.

import requests
import random
import time

agents = [...]  # List of user agents

for i in range(1, 100):
    url = "https://panoramafirm.pl/odpady/firmy,{}.html".format(i)
    # Pick a random user agent for each request
    r = requests.get(url, headers={"User-Agent": random.choice(agents)})
    print(r.status_code)
    time.sleep(random.randint(10, 15))  # random delay between requests
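The proxy rotation mentioned in the question isn't shown in the code above; a sketch of how it could plug into requests (the proxy addresses and helper names here are placeholders, not real proxies or the asker's actual code):

```python
import random
import requests

def build_proxies(proxy):
    """Build the requests ``proxies`` dict for a single "host:port" proxy."""
    return {"http": "http://" + proxy, "https": "http://" + proxy}

# Placeholder addresses standing in for the question's list of free proxies.
free_proxies = ["203.0.113.1:8080", "198.51.100.7:3128"]

def get_rotating(url, user_agent):
    """One GET through a randomly chosen proxy, with a timeout so a
    dead free proxy fails fast instead of hanging."""
    return requests.get(
        url,
        headers={"User-Agent": user_agent},
        proxies=build_proxies(random.choice(free_proxies)),
        timeout=10,
    )
```

Note that free proxies are frequently already banned by the target site, which can itself trigger the redirect-to-homepage behavior described above.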

Since you are using requests, you can use the allow_redirects=False option.
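A minimal sketch of that approach (the URL is the one from the question; the helper names are my own): with allow_redirects=False, requests returns the raw first 3xx response instead of following the chain, so you can at least see where the server is sending you.

```python
import requests

def describe(r):
    """Summarize a response: the redirect target for a 3xx, body size otherwise."""
    if r.is_redirect:
        return "{} -> {}".format(r.status_code, r.headers.get("Location"))
    return "{} ({} bytes)".format(r.status_code, len(r.content))

def check_redirect(url):
    """Fetch url without following redirects and report what happened."""
    # A Session persists cookies between requests; redirect loops are often
    # the server bouncing you until it sees its own cookie echoed back.
    session = requests.Session()
    session.headers["User-Agent"] = "Mozilla/5.0"  # any desktop UA string
    # allow_redirects=False returns the first response instead of
    # following up to 30 hops and raising TooManyRedirects.
    r = session.get(url, allow_redirects=False, timeout=10)
    return describe(r)

# e.g. check_redirect("https://panoramafirm.pl/odpady/firmy,1.html")
```

Inspecting the Location header (and any Set-Cookie on the 3xx response) usually reveals whether the block is cookie-based, in which case reusing one Session across the whole loop may be enough to avoid the loop.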
