
HTTP Error 403: Forbidden on urllib2 request

I am using python's urllib2 and bs4, but urllib2 is running into some issues. Certain sites, such as: http://dannijo.com/jewelry/necklaces/paloma.html

http://www.freepeople.com/

only return the error shown below:

HTTP Error 403: Forbidden

I have seen this question on Stack Overflow: urllib2.HTTPError: HTTP Error 403: Forbidden. But the hdrs they suggest do not get past the 403 Forbidden.

If anyone knows a better hdr, or can let me know what is causing this issue, it would be much appreciated.

This is the code that I currently have:

    import urllib2
    from bs4 import BeautifulSoup

    hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
           'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
           'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
           'Accept-Encoding': 'none',
           'Accept-Language': 'en-US,en;q=0.8',
           'Connection': 'keep-alive'}
    req = urllib2.Request(url, headers=hdr)
    page = urllib2.urlopen(url)
    soup = BeautifulSoup(page.read())

You don't actually use the req instance, so your custom headers are never sent; urlopen(url) builds a fresh request with Python's default User-Agent, which many sites reject with 403. Do the following instead:

    soup = BeautifulSoup(urllib2.urlopen(req).read())
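Note that urllib2 is Python 2 only. A minimal sketch of the same fix on Python 3, assuming urllib.request as the replacement (the request is built but deliberately not sent, so nothing here depends on the network):

```python
import urllib.request

# Spoofed browser headers, mirroring the question's hdr dict.
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 '
                     '(KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'}

url = 'http://dannijo.com/jewelry/necklaces/paloma.html'

# The Request object is what carries the headers; passing the bare url
# string to urlopen() would discard them and trigger the 403.
req = urllib.request.Request(url, headers=hdr)

# To actually fetch and parse (network call, so left commented here):
# from bs4 import BeautifulSoup
# soup = BeautifulSoup(urllib.request.urlopen(req).read(), 'html.parser')
```

urllib stores header keys in capitalized form, so the User-Agent above is retrievable via `req.get_header('User-agent')`.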

