How do I loop through URLs with Selenium for scraping?
The following is the code I have so far to scrape https://n.rivals.com/state_rankings/2021/alabama . I want the code to loop through the URL, replacing alabama with each of the other states. Ideally, I would also like to be able to change the year for future use. What am I doing wrong with how I defined url and year/state1?
from selenium import webdriver
from selenium.common.exceptions import TimeoutException

TIMEOUT = 5

driver = webdriver.Firefox()
driver.set_page_load_timeout(TIMEOUT)

url = f"https://n.rivals.com/state_rankings/{year}/{state1}"
year = "2021"
state1 = "alabama"

try:
    driver.get(url)
except TimeoutException:
    pass

first_names = driver.find_elements_by_class_name('first-name')
first_names = [name.text for name in first_names]
last_names = driver.find_elements_by_class_name('last-name')
last_names = [name.text for name in last_names]

for first, last in zip(first_names, last_names):
    print(first, last)

player_positions = driver.find_elements_by_class_name('pos')
player_positions = [position.text for position in player_positions]

for position in player_positions:
    print(position)

data = driver.find_elements_by_xpath('//div[@class="break-text ng-binding ng-scope"]')
for d in data:
    location, highschool = d.text.strip().split('\n')
    city, state = location.split(',')
    print(city)
    print(state)
    print(highschool)

commit_status = driver.find_elements_by_class_name('school-name')
commit_status = [commit.text for commit in commit_status]

for commit in commit_status:
    print(commit)

driver.close()
You have to create the variables before you refer to them, like so:
year = "2021"
state1 = "alabama"
url = f"https://n.rivals.com/state_rankings/{year}/{state1}"
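The reason the order matters is that an f-string is evaluated immediately, on the line where it appears, so `year` and `state1` must already exist at that point. A minimal sketch (no Selenium needed) illustrating this:

```python
year = "2021"
state1 = "alabama"
url = f"https://n.rivals.com/state_rankings/{year}/{state1}"
print(url)  # https://n.rivals.com/state_rankings/2021/alabama

# Reassigning a variable afterwards does NOT update the already-built string:
state1 = "georgia"
print(url)  # still ends in /alabama
```

This is also why reassigning `state1` later would not change `url`; you must rebuild the f-string (e.g. inside a function or loop) for each new state.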
Then, to make it loop over many states, you would do the following:
from selenium import webdriver
from selenium.common.exceptions import TimeoutException

TIMEOUT = 5

driver = webdriver.Firefox()
driver.set_page_load_timeout(TIMEOUT)

def rivals_scrape(state, year):
    url = f"https://n.rivals.com/state_rankings/{year}/{state}"
    try:
        driver.get(url)
    except TimeoutException:
        pass
    first_names = driver.find_elements_by_class_name('first-name')
    ... rest of code ...
    for commit in commit_status:
        print(commit)

states = ["alabama", "georgia", "texas"]
for state in states:
    rivals_scrape(state, "2021")

driver.close()
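Since the question also asks about changing the year, one way to extend this is to build every URL up front with `itertools.product` and feed each one to the scrape function. This is just a sketch; the state list below is an illustrative subset, not the full set of states the site covers:

```python
from itertools import product

states = ["alabama", "georgia", "texas"]  # illustrative subset
years = ["2020", "2021"]

# Build one URL per (year, state) combination.
urls = [
    f"https://n.rivals.com/state_rankings/{year}/{state}"
    for year, state in product(years, states)
]

# Each URL can then be passed to driver.get() (or a rivals_scrape-style
# function taking a URL) in turn.
for url in urls:
    print(url)
```

This keeps the URL construction in one place, so adding a year or a state is a one-line change to the corresponding list.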