
Beautiful Soup scraper gives "Access Denied" even though user-agent string is specified

I am trying to scrape this website: https://www.ralphlauren.com/men?webcat=men, but instead of the page's HTML I get a page that says Access to this page has been denied.

I tried using a user-agent string in the request headers, as suggested here: Scraper in Python gives "Access Denied"

But I still get the same error.

Current output:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta content="width=device-width, initial-scale=1" name="viewport"/>
<title>Access to this page has been denied.</title>
<link href="https://fonts.googleapis.com/css?family=Open+Sans:300" rel="stylesheet"/>

My code:

from bs4 import BeautifulSoup as soup
import requests

url = "https://www.ralphlauren.com/men?webcat=men"

def make_soup(url):
    # Send the request with a browser-like User-Agent header
    header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"}
    page = requests.get(url, headers=header)
    page_soup = soup(page.content, 'lxml')
    return page_soup

print(make_soup(url))

The website is protected by Cloudflare, so a browser-like User-Agent alone is not enough. You can use cloudscraper instead of requests:

from bs4 import BeautifulSoup
import cloudscraper

# cloudscraper solves Cloudflare's anti-bot challenge before returning the response
scraper = cloudscraper.create_scraper(delay=10, browser={'custom': 'ScraperBot/1.0'})
url = 'https://www.ralphlauren.com/men?webcat=men'
req = scraper.get(url)
print(req)
soup = BeautifulSoup(req.text, 'lxml')

Output:

<Response [200]>
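
The 200 response means the Cloudflare challenge was passed and req.text now holds the real page, so the soup object can be queried as usual. As a minimal sketch of what you could do next (the queries below are generic assumptions, not selectors verified against Ralph Lauren's actual markup):

# Sketch only: generic BeautifulSoup queries, not verified against the site's markup
print(soup.title.string if soup.title else "no <title> found")

# Collect all hyperlinks on the page as a quick sanity check
links = [a["href"] for a in soup.find_all("a", href=True)]
print(f"found {len(links)} links")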
