Selenium providing '[Errno 13]' but no location of error
I am trying to develop a web scraper using Python, Beautiful Soup, and Selenium that can browse the Steam Community Market.
import requests
from bs4 import BeautifulSoup
import time
import selenium
from selenium import webdriver
import chromedriver_binary
driver = webdriver.Chrome("")
steam_market_URL = 'https://steamcommunity.com/market/search?q=&category_730_ItemSet%5B%5D=any&category_730_ProPlayer%5B%5D=any&category_730_StickerCapsule%5B%5D=any&category_730_TournamentTeam%5B%5D=any&category_730_Weapon%5B%5D=any&appid=730#p1_popular_desc'
driver.get(steam_market_URL)
for pageNum in range(1, 6):
    steam_market_HTML = requests.get(steam_market_HTML).text
    HTML_parser = BeautifulSoup(steam_market_HTML, 'html.parser')
    popular_steam_items = HTML_parser.findAll(attrs={"class": "market_listing_searchresult"})
    popular_steam_items_URL = HTML_parser.findAll(attrs={"class": "market_listing_row_link"})
    for item in range(0, len(popular_steam_items)):
        print(popular_steam_items[item]["data-hash-name"] + " " + popular_steam_items_URL[item]["href"] + "\n")
    driver.find_element_by_id_name("searchResults_btn_next").click()
    time.sleep(.5)
In theory, this should navigate through the first five pages of the Steam "popular items" list and print the name of each item plus the URL for that item, waiting .5 seconds between each page switch (if I biffed here and my code won't work, please let me know).
However, after running the code I am faced with this error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py", line 72, in start
    self.process = subprocess.Popen(cmd, env=self.env,
  File "/usr/lib/python3.8/subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.8/subprocess.py", line 1702, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: ''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "seleniumTest.py", line 7, in <module>
    driver = webdriver.Chrome("")
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
    self.service.start()
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py", line 86, in start
    raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: '' executable may have wrong permissions. Please see https://sites.google.com/a/chromium.org/chromedriver/home
Most other versions of this error that I have seen on SE provide a location after '[Errno 13] Permission denied:', and I'm a little bit lost on what to change here. Any help would be greatly appreciated! Thanks!
Your code has three bugs.
First, do not put "" inside the parentheses:
driver = webdriver.Chrome()
Second, pass the URL (not the HTML variable) to requests.get:
steam_market_HTML = requests.get(steam_market_URL).text
Third, name and id are two different attributes, so WebDriver provides a separate locator method for each and you can only use one at a time; in your case the locator is an ID:
driver.find_element_by_id("searchResults_btn_next").click()
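As a side note (not one of the three fixes above), the index-based inner loop that pairs item names with item URLs can be written more idiomatically with zip. A minimal sketch on a made-up HTML fragment; only the two class names and the data-hash-name attribute are taken from the question, the rest of the markup is invented for illustration:

```python
from bs4 import BeautifulSoup

# Invented HTML fragment imitating the Steam market listing markup.
sample_html = """
<a class="market_listing_row_link" href="https://steamcommunity.com/market/listings/730/item-one">
  <div class="market_listing_searchresult" data-hash-name="AK-47 | Redline"></div>
</a>
<a class="market_listing_row_link" href="https://steamcommunity.com/market/listings/730/item-two">
  <div class="market_listing_searchresult" data-hash-name="AWP | Asiimov"></div>
</a>
"""

parser = BeautifulSoup(sample_html, "html.parser")
items = parser.find_all(attrs={"class": "market_listing_searchresult"})
links = parser.find_all(attrs={"class": "market_listing_row_link"})

# zip pairs each item tag with its link tag, avoiding manual index bookkeeping
pairs = [(item["data-hash-name"], link["href"]) for item, link in zip(items, links)]
for name, href in pairs:
    print(name, href)
```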
With those fixes, this should be the correct code:
import requests
from bs4 import BeautifulSoup
import time
import selenium
from selenium import webdriver
import chromedriver_binary
driver = webdriver.Chrome()
steam_market_URL = 'https://steamcommunity.com/market/search?q=&category_730_ItemSet%5B%5D=any&category_730_ProPlayer%5B%5D=any&category_730_StickerCapsule%5B%5D=any&category_730_TournamentTeam%5B%5D=any&category_730_Weapon%5B%5D=any&appid=730#p1_popular_desc'
driver.get(steam_market_URL)
for pageNum in range(1, 6):
    steam_market_HTML = requests.get(steam_market_URL).text
    HTML_parser = BeautifulSoup(steam_market_HTML, 'html.parser')
    popular_steam_items = HTML_parser.findAll(attrs={"class": "market_listing_searchresult"})
    popular_steam_items_URL = HTML_parser.findAll(attrs={"class": "market_listing_row_link"})
    for item in range(0, len(popular_steam_items)):
        print(popular_steam_items[item]["data-hash-name"] + " " + popular_steam_items_URL[item]["href"] + "\n")
    driver.find_element_by_id("searchResults_btn_next").click()
    time.sleep(.5)
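One remaining issue the fixes above don't address: requests.get(steam_market_URL) downloads the same first page on every loop iteration, because the URL never changes and requests knows nothing about the pages Selenium clicks through. One way around this, assuming you keep Selenium for navigation, is to parse driver.page_source (the live browser DOM) instead of re-fetching with requests. A sketch; the helper name parse_listings is mine, not from the original code:

```python
from bs4 import BeautifulSoup

def parse_listings(html):
    """Extract (data-hash-name, href) pairs from one page of market HTML."""
    parser = BeautifulSoup(html, "html.parser")
    items = parser.find_all(attrs={"class": "market_listing_searchresult"})
    links = parser.find_all(attrs={"class": "market_listing_row_link"})
    return [(item["data-hash-name"], link["href"]) for item, link in zip(items, links)]

# Inside the page loop you would call it on the browser's current DOM
# instead of a fresh requests download, e.g.:
#     listings = parse_listings(driver.page_source)
#     driver.find_element_by_id("searchResults_btn_next").click()

# Quick check on a made-up fragment mimicking the Steam markup:
sample = ('<div class="market_listing_searchresult" data-hash-name="Test Item"></div>'
          '<a class="market_listing_row_link" href="https://example.invalid/item"></a>')
print(parse_listings(sample))
```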