
get all links on a page with Python Selenium 4.1.0

I would like to find and visit all the links on a page using Python Selenium, but I am getting the following error.

Traceback (most recent call last):
  File "C:\Users\Acer\PycharmProjects\selenium-rpa\main.py", line 24, in <module>
    print(elem.get_attribute("href"))
AttributeError: 'str' object has no attribute 'get_attribute'. Did you mean: '__getattribute__'?

My code:

from selenium import webdriver
from datetime import datetime
import requests, urllib3
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service

PATH = Service("C:\chromedriver.exe")
url = "http://localhost/rpa_anomaly/test.php"
browser = webdriver.Chrome(service=PATH)
browser.get(url)

elems = browser.find_element(By.XPATH, "//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))

The same problem is described in the question linked below, but my Selenium version is newer, so I can't use it like this:

elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))

Fetch all href link using selenium in python
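
In Selenium 4 the find_elements_by_* helpers are deprecated, and find_element (singular) returns only a single match, so it cannot be looped over to reach every anchor. The replacement is the generic find_elements (plural) with a By locator, which returns a list of WebElement objects. Below is a minimal sketch of the same loop written against the Selenium 4 API, reusing the driver path and URL from the question; collecting the hrefs first and then visiting each one is only an illustration of the "find and visit" goal, not part of the original code.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

service = Service(r"C:\chromedriver.exe")
browser = webdriver.Chrome(service=service)
browser.get("http://localhost/rpa_anomaly/test.php")

# find_elements (plural) returns a list of WebElement objects
elems = browser.find_elements(By.XPATH, "//a[@href]")

# collect the hrefs first; navigating away would make the elements stale
links = [elem.get_attribute("href") for elem in elems]

# visit each collected link
for link in links:
    print(link)
    browser.get(link)

browser.quit()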

You can try the regular tag name locator method:

from selenium import webdriver
from webdriver_manager.microsoft import EdgeChromiumDriverManager

driver = webdriver.Edge(EdgeChromiumDriverManager().install())
driver.get("http://localhost/rpa_anomaly/test.php")
# identify elements with tagname <a>
lnks = driver.find_elements_by_tag_name("a")
# traverse list
for lnk in lnks:
    print(lnk.get_attribute("href"))

driver.quit()
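
Note that find_elements_by_tag_name is deprecated in Selenium 4 (and removed in later 4.x releases), and passing the driver path positionally to webdriver.Edge is deprecated as well. Here is a sketch of the same tag-name approach written against the Selenium 4 API, assuming webdriver_manager is installed:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.edge.service import Service
from webdriver_manager.microsoft import EdgeChromiumDriverManager

# let webdriver_manager download a matching msedgedriver and wrap it in a Service
driver = webdriver.Edge(service=Service(EdgeChromiumDriverManager().install()))
driver.get("http://localhost/rpa_anomaly/test.php")

# identify all <a> elements and print their href attributes
for lnk in driver.find_elements(By.TAG_NAME, "a"):
    print(lnk.get_attribute("href"))

driver.quit()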
