Two almost identical code snippets, one works but the other doesn't
I don't know why the first snippet works but the second doesn't. After running the "adidas" code I get "connection aborted, OSError 10054". I've heard something about APIs on websites; to be honest I don't know what that is, but I feel it's related :D
IT WORKS:
import requests
from bs4 import BeautifulSoup
odpowiedz = requests.get("https://www.nike.com/pl/w?q=react%20270&vst=react%20270")
soup = BeautifulSoup(odpowiedz.text, 'html.parser')
IT DOESN'T WORK:
import requests
from bs4 import BeautifulSoup
odpowiedz = requests.get("https://www.adidas.pl/search?q=ultraboost")
soup = BeautifulSoup(odpowiedz.text, 'html.parser')
You can use selenium instead of requests to get the page source:
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get("https://www.adidas.pl/search?q=ultraboost")
source = driver.page_source
soup = BeautifulSoup(source, 'html.parser')
If you want to exit Chrome after you've got the page source, use driver.quit():
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get("https://www.adidas.pl/search?q=ultraboost")
source = driver.page_source
driver.quit()
soup = BeautifulSoup(source, 'html.parser')
If you don't want a Chrome window to appear, run it headless:
from selenium import webdriver
from bs4 import BeautifulSoup
options = webdriver.ChromeOptions()
options.add_argument('--headless')
driver = webdriver.Chrome(options=options)
driver.get("https://www.adidas.pl/search?q=ultraboost")
source = driver.page_source
driver.quit()
soup = BeautifulSoup(source, 'html.parser')
Daweo is right, the Adidas server checks the User-Agent header.
This works for me:
import requests
from bs4 import BeautifulSoup
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0",
#"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
#"Accept-Language": "en-US,en;q=0.5",
}
odpowiedz = requests.get("https://www.adidas.pl/search?q=ultraboost", headers=headers)
soup = BeautifulSoup(odpowiedz.text, 'html.parser')
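If you are going to fetch several pages, a requests.Session lets you set the User-Agent once and have it sent with every request. This is just a sketch of that variant, reusing the URL and header value from the snippet above; the resulting .text is parsed with BeautifulSoup exactly as before.

```python
import requests

# A Session applies its headers to every request made through it,
# so the User-Agent only has to be set once.
session = requests.Session()
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0",
})

# odpowiedz = session.get("https://www.adidas.pl/search?q=ultraboost")
# soup = BeautifulSoup(odpowiedz.text, 'html.parser')
```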
It even accepts "aaaaaaaaaaaaaadaaaMozilla".
For Adidas.com, if you don't have an acceptable User-Agent, it returns a page explaining why:
During high-traffic product releases we have extra security in place to prevent bots entering our site. We do this to protect customers and to give everyone a fair chance of getting the sneakers. Something in your setup must have triggered our security system, so we cannot allow you onto the site.
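Since that block page comes back with the normal HTML response, a scraper can end up silently parsing it instead of real results. One way to notice is to check the body for the message text; `is_blocked` below is a hypothetical helper based on the wording quoted above, not part of the original answer.

```python
# Hypothetical helper: detect the bot-block page by its message text.
BLOCK_MARKER = "triggered our security system"

def is_blocked(html: str) -> bool:
    """Return True if the response body looks like the bot-block page."""
    return BLOCK_MARKER in html

# Example with canned bodies (no network needed):
blocked_page = "<p>Something in your setup must have triggered our security system.</p>"
normal_page = "<p>Ultraboost running shoes</p>"
print(is_blocked(blocked_page))  # True
print(is_blocked(normal_page))   # False
```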