Update on Using Selenium To Scrape JavaScript-Heavy Websites in Python
My first bit of code looked something like this:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://explorer.helium.com/accounts/13pm9juR7WPjAf7EVWgq5EQAaRTppu2EE7ReuEL9jpkHQMJCjn9")
earnings = driver.find_elements_by_class_name('text-base text-gray-600 mb-1 tracking-tight w-full break-all')
print(earnings)
driver.quit()
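A likely reason this first attempt returns an empty list: `find_elements_by_class_name` accepts a single class name, while the argument above is a compound of several classes. One way around that (a sketch, not from the original post) is to join the classes into a CSS selector:

```python
# find_elements_by_class_name takes ONE class name; a space-separated
# compound like this one has to be turned into a CSS selector instead.
classes = "text-base text-gray-600 mb-1 tracking-tight w-full break-all"
css_selector = "." + ".".join(classes.split())
print(css_selector)
# With Selenium this selector would then be used as:
#   earnings = driver.find_elements(By.CSS_SELECTOR, css_selector)
```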
I have now gotten as far as adding a wait, but the code below still returns nothing:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://explorer.helium.com/accounts/13pm9juR7WPjAf7EVWgq5EQAaRTppu2EE7ReuEL9jpkHQMJCjn9")
try:
    element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, ".//*[@id='app']/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]")))
finally:
    earnings = driver.find_elements_by_xpath('.//*[@id="app"]/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]')
    print(earnings)
    print("loaded")
    driver.quit()
I simply want to scrape the dollar-amount text inside this container (screenshot of the container omitted). Any further help with this problem would be appreciated.
As already explained, find_elements returns a list of WebElements, and you access an individual one by index, e.g. earnings[0]:
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://explorer.helium.com/accounts/13pm9juR7WPjAf7EVWgq5EQAaRTppu2EE7ReuEL9jpkHQMJCjn9")
try:
    element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, ".//*[@id='app']/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]")))
finally:
    earnings = driver.find_elements(By.XPATH, './/*[@id="app"]/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]')
    print(earnings[0].text)
    print("loaded")
    driver.quit()
find_element returns a single WebElement, and you access the text the same way:
try:
    element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, ".//*[@id='app']/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]")))
finally:
    earnings = driver.find_element(By.XPATH, './/*[@id="app"]/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]')
    print(earnings.text)
    print("loaded")
In both cases the output is:
$14.08
loaded
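A side note not in the original answer: the `find_element(s)_by_*` helpers were deprecated and removed in Selenium 4 in favour of `driver.find_element(By.XPATH, ...)`. Separately, once the text is scraped, turning a string like "$14.08" into a number takes one more step; a small sketch:

```python
def parse_dollars(text: str) -> float:
    """Convert a scraped dollar string such as "$14.08" into a float."""
    # Drop the currency symbol and any thousands separators before parsing.
    return float(text.replace("$", "").replace(",", ""))

print(parse_dollars("$14.08"))     # 14.08
print(parse_dollars("$1,234.50"))  # 1234.5
```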