Web Scraping - Requests ConnectionError: ('Connection aborted.', OSError("(60, 'ETIMEDOUT')",))
I'm trying to access a webpage. I used `fake_useragent`'s `UserAgent` to add a `User-Agent` header, but I still get a timeout error. My new code:
from fake_useragent import UserAgent
import requests
url = "https://www.bestbuy.com/site/lg-65-class-oled-b9-series-2160p-smart-4k-uhd-tv-with-hdr/6360611.p?skuId=6360611"
ua = UserAgent()
print(ua.chrome)
header = {'User-Agent':str(ua.chrome)}
print(header)
url_get = requests.get(url, headers=header)
print(url_get)
--> 285     raise SocketError(str(e))
    286 except OpenSSL.SSL.ZeroReturnError as e:

OSError: (60, 'ETIMEDOUT')

During handling of the above exception, another exception occurred:

ProtocolError                             Traceback (most recent call last)
/anaconda3/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
    439     retries=self.max_retries,
--> 440     timeout=timeout
    441 )

ProtocolError: ('Connection aborted.', OSError("(60, 'ETIMEDOUT')",))

During handling of the above exception, another exception occurred:
You don't need to use fake_useragent. Just try something like this: pass a User-Agent header and your browser cookies to the request.
import requests
url = "https://www.bestbuy.com/site/lg-65-class-oled-b9-series-2160p-smart-4k-uhd-tv-with-hdr/6360611.p?skuId=6360611"
agent = {"User-Agent":'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'}
cookies = {"cookie":"COPY_HERE_YOUR_COOKIE_FROM_BROWSER"}
url_get = requests.get(url, headers=agent, cookies=cookies)
print(url_get.text)
If you don't know how to get the cookies, right-click in your browser (Chrome, for example) -> Inspect -> Network, reload the page, select the first request, and look at its request headers. This code works for me.
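Since the underlying error is a network-level `ETIMEDOUT` rather than an HTTP rejection, it can also help to set an explicit timeout and a retry policy so a single stalled connection doesn't hang the script. A minimal sketch, assuming the same URL and a hypothetical choice of 3 retries with a 10-second timeout:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Build a session with a retry policy (hypothetical settings:
# 3 attempts, exponential backoff, retry on common transient statuses).
session = requests.Session()
retry = Retry(total=3, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retry))
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/78.0.3904.87 Safari/537.36"
})

url = ("https://www.bestbuy.com/site/lg-65-class-oled-b9-series-"
       "2160p-smart-4k-uhd-tv-with-hdr/6360611.p?skuId=6360611")
try:
    # timeout applies per attempt; without it requests can wait forever.
    resp = session.get(url, timeout=10)
    print(resp.status_code)
except requests.exceptions.RequestException as exc:
    # If you still land here, the retries were exhausted: the site is
    # likely blocking the request (bot detection) or the host is
    # unreachable from your network.
    print(f"Request failed after retries: {exc}")
```

If the request still times out after retries with a browser-like User-Agent, the block is probably happening at the server side, and the cookie approach above is the more promising fix.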