Update on Using Selenium To Scrape JavaScript-Heavy Websites in Python
My first piece of code looked something like this:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://explorer.helium.com/accounts/13pm9juR7WPjAf7EVWgq5EQAaRTppu2EE7ReuEL9jpkHQMJCjn9")
earnings = driver.find_elements_by_class_name('text-base text-gray-600 mb-1 tracking-tight w-full break-all')
print(earnings)
driver.quit()
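One likely reason this first attempt comes back empty is that the class-name locator accepts a single class, not a space-separated compound of several classes. A minimal sketch (my own assumption, not part of the original post) of turning the compound class string into a CSS selector that Selenium's CSS locator can use:

```python
# The compound class string from the page cannot be passed to a
# class-name locator, which takes exactly one class. Joining the
# classes with '.' builds an equivalent CSS selector instead:
classes = 'text-base text-gray-600 mb-1 tracking-tight w-full break-all'
css_selector = '.' + '.'.join(classes.split())
print(css_selector)
# The selector could then be used as, e.g.:
# driver.find_elements_by_css_selector(css_selector)
```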
I have now got as far as adding a wait, but the code here still returns nothing.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(ChromeDriverManager().install())
driver = webdriver.Chrome()
driver.get("https://explorer.helium.com/accounts/13pm9juR7WPjAf7EVWgq5EQAaRTppu2EE7ReuEL9jpkHQMJCjn9")
try:
    element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, ".//*[@id='app']/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]")))
finally:
    earnings = driver.find_elements_by_xpath('.//*[@id="app"]/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]')
    print(earnings)
    print("loaded")
    driver.quit()
I just want to scrape the text with the dollar amount in this container: [image of container]
Any further help with this problem would be appreciated.
As already explained, find_elements returns a List of WebElements, and to access one of them you use an index - earnings[0]:
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://explorer.helium.com/accounts/13pm9juR7WPjAf7EVWgq5EQAaRTppu2EE7ReuEL9jpkHQMJCjn9")
try:
    element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, ".//*[@id='app']/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]")))
finally:
    earnings = driver.find_elements_by_xpath('.//*[@id="app"]/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]')
    print(earnings[0].text)
    print("loaded")
    driver.quit()
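The list-versus-element distinction can be seen without a browser. In this sketch, FakeElement is an illustrative stub standing in for a Selenium WebElement (my own name, not a Selenium API): printing the raw list shows element objects, while indexing into it and reading .text gives the value you are after.

```python
class FakeElement:
    """Stand-in for a Selenium WebElement (illustrative stub)."""
    def __init__(self, text):
        self.text = text

# find_elements-style calls return a list, so printing the list shows
# element objects rather than their text:
earnings = [FakeElement('$14.08')]
print(earnings)          # a list of element objects
print(earnings[0].text)  # index first, then read .text
```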
find_element returns a single WebElement, and you access its content the same way:
try:
    element = WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, ".//*[@id='app']/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]")))
finally:
    earnings = driver.find_element_by_xpath('.//*[@id="app"]/article/div[2]/div/div[2]/div/div[2]/div[3]/div[1]/div[1]/div[3]')
    print(earnings.text)
    print("loaded")
For both, the output is:
$14.08
loaded
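As a side note, WebDriverWait.until returns whatever the expected condition returns, so the element located by EC.presence_of_element_located is already available and the second find in the finally block is not strictly needed. A minimal stand-in demonstrating that return behavior (FakeWait and FakeElement are illustrative stubs, not Selenium APIs):

```python
class FakeElement:
    """Stand-in for a Selenium WebElement (illustrative stub)."""
    text = '$14.08'

class FakeWait:
    """Stand-in for WebDriverWait (illustrative stub). Like Selenium's
    until(), it returns whatever the condition callable returns."""
    def until(self, condition):
        return condition(None)

# The waited-for element comes back from until() and can be used
# directly, without calling find_element a second time:
element = FakeWait().until(lambda driver: FakeElement())
print(element.text)
```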