
Python requests raises timeout error on a website that works on a browser

I'm working on a Python script that periodically checks many sites and sends a warning when a site has been down for too long.

We received warnings for some sites, but when we checked them with a regular browser (latest Firefox version), they actually worked.

So I tried sending a basic request like this:

>>> from requests import get
>>> address = 'sites address'
>>> get(url=address, verify=False)

And then I got a timeout error (TimeoutError [WinError 10060]).

Does anyone have a clue why this happens?
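(As an aside: it helps to pass an explicit timeout so the checker fails fast and raises a catchable requests exception, instead of hanging on the OS-level connect timeout that surfaces as WinError 10060. A minimal sketch; site_is_up is a hypothetical helper, and 10.255.255.1 is just a non-routable address used to demonstrate the failure path:)

```python
import requests

def site_is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the site answers, False on timeout or connection error."""
    try:
        requests.get(url, timeout=timeout, verify=False)
        return True
    except requests.exceptions.RequestException:
        # Covers Timeout, ConnectionError, and other request failures
        return False

# A non-routable address fails quickly instead of hanging:
print(site_is_up("http://10.255.255.1", timeout=0.5))  # False
```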

I recently ran into the same problem. I was behind a proxy server, and because my company had configured it in the browser, the site worked there. What I did was pass the proxies to requests.get():

import requests

http_proxy = "proxy_server_url"
https_proxy = "proxy_server_url"   # could be the same as the http proxy

proxies = {"http": http_proxy, "https": https_proxy}
url = "http://example.com"
response = requests.get(url, proxies=proxies)
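As a side note, requests also honors the standard HTTP_PROXY/HTTPS_PROXY environment variables by default (trust_env is True on a Session), so exporting them is an alternative to passing proxies= on every call. A minimal sketch, with a hypothetical proxy URL; merge_environment_settings shows which proxies requests would actually apply to a URL:

```python
import os
import requests

# Hypothetical proxy URL, for illustration only
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

session = requests.Session()  # trust_env=True by default, so env vars apply
# Returns the settings (including proxies) that would be used for this URL
settings = session.merge_environment_settings("http://example.com", {}, None, None, None)
print(settings["proxies"])
```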

This works when you're behind a proxy, but fails when you're not. To make it work either way, add a check:

import requests
import urllib.request

def check_proxy(http_proxy, https_proxy):
    """Return a proxies dict for requests if the proxy answers, else None."""
    is_bad_proxy = False
    try:
        proxy_handler = urllib.request.ProxyHandler({"http": http_proxy, "https": https_proxy})
        opener = urllib.request.build_opener(proxy_handler)
        opener.addheaders = [('User-agent', 'Mozilla/5.0')]
        req = urllib.request.Request("https://google.com")  # change the URL to test here
        opener.open(req, timeout=10)  # no need to install the opener globally
    except Exception:
        is_bad_proxy = True

    if is_bad_proxy:
        proxies = None  # requests will then connect directly
    else:
        proxies = {"http": http_proxy, "https": https_proxy}

    return proxies

http_proxy = "proxy_server_url"
https_proxy = "proxy_server_url"

proxies = check_proxy(http_proxy, https_proxy)
url = "http://example.com"
response = requests.get(url, proxies=proxies)
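An alternative to probing with a test request (a sketch, not the answer's original approach): ask the operating system for the proxy configuration directly. On Windows, urllib.request.getproxies() reads the Internet Settings registry keys, i.e. the system proxy that system-configured browsers go through (Firefox can have its own separate settings), and it falls back to the proxy environment variables elsewhere:

```python
import urllib.request

# getproxies() returns the system/environment proxy map, e.g.
# {"http": "http://proxy.example.com:8080", ...}, or {} if none is configured.
system_proxies = urllib.request.getproxies()
print(system_proxies)

# An empty dict is fine: requests then connects directly.
# response = requests.get("http://example.com", proxies=system_proxies)
```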
