
Selenium find_element throwing exception even though the element exists

My code:

soup = BeautifulSoup(driver.page_source,features="html.parser")
applications_domains = []

for card in soup.find_all("div", {"class":"ant-row"}):
    for url in card.find_all("a"):
        applications_domains.append(url.get("href"))

for applications_domain in applications_domains:
    try:
        WebDriverWait(driver,10).until(EC.presence_of_element_located((By.XPATH,"//a[@href='" + applications_domain + "']")))
        driver.find_element_by_xpath("//a[@href='" + applications_domain + "']").click()
    except:
        soup = BeautifulSoup(driver.page_source,features="html.parser")
        print(soup.find_all("a",{"href":applications_domain}))
        print(f"test error {applications_domain}")
        print("-----------------------")

I have an issue with find_element_by_xpath not finding the element even though it exists. I double-checked with soup whether the element is indeed present, and it is, as the output shows.

Output:

<a href="applications_domain"><b></b></a>
test error applications_domain

I have a loop that goes through each application domain (built from the data in each href). It finds and clicks the a href element most of the time, but for some it does not, and I have no idea why.

Here is the site HTML. There are many div id="application_name_list" elements, and each contains a different a href that I need to click through:

<div class="ant-row" style="margin-left: -6px; margin-right: -6px;">
<div id="application_name_list" class="ant-col-8 dyff-home-app-search-result-item" style="padding-left: 6px; padding-right: 6px;">
    <a href="/dyfflaunch/domain/gco/app/di_data_customer_experience_conversation_processor/features">di_data_customer_experience_conversation_processor<b></b></a>
</div>
<div id="application_name_list" class="ant-col-8 dyff-home-app-search-result-item" style="padding-left: 6px; padding-right: 6px;">
    <a href="/dyfflaunch/domain/gco/app/di_kafka_configservice_agentqueuegroup_dim_v1-prod/features">di_kafka_configservice_agentqueuegroup_dim_v1-prod<b></b></a>
</div>
<div id="application_name_list" class="ant-col-8 dyff-home-app-search-result-item" style="padding-left: 6px; padding-right: 6px;">
    <a href="/dyfflaunch/domain/gco/app/di_kafka_configservice_phoneinventory_dim_v1-prod/features">di_kafka_configservice_phoneinventory_dim_v1-prod<b></b></a>
</div>
</div>

This is one, pretty generic, way of doing it:

a_tags=driver.find_elements_by_xpath("//div[@id='application_name_list']//a")

for a_tag in a_tags:
    a_tag.click()

If you have examples where this doesn't work, please add one to the question.
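One caveat with clicking elements collected up front: if a click navigates away from the listing page, the remaining WebElements in a_tags go stale. A way around that is to collect the href values first and navigate with driver.get() on absolute URLs instead of clicking. A minimal, driver-free sketch of building those URLs (the base URL here is a hypothetical placeholder; in practice you would use driver.current_url):

```python
from urllib.parse import urljoin

# Hypothetical listing-page URL; with Selenium, use driver.current_url instead.
base = "https://example.com/dyfflaunch/home"

# hrefs as collected from the anchors (relative paths, as in the page HTML above)
hrefs = ["/dyfflaunch/domain/gco/app/di_data_customer_experience_conversation_processor/features"]

for href in hrefs:
    # Resolve the relative href against the page URL, then navigate directly.
    absolute = urljoin(base, href)
    print(absolute)
    # driver.get(absolute)  # replaces the click, so nothing goes stale
```

Navigating by URL also makes each iteration independent of the page state left behind by the previous one.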

I would suggest using WebDriverWait() to wait for visibility_of_all_elements_located(), and then using the following CSS selector to click.

driver.get("url here")
WebDriverWait(driver,10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".ant-row")))
for link in driver.find_elements_by_css_selector(".ant-row>#application_name_list>a[href]"):
    link.click()

If you want to use Beautiful Soup together with Selenium to do that, then try this one.

driver.get("url here")
WebDriverWait(driver,10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".ant-row")))

soup = BeautifulSoup(driver.page_source,features="html.parser")
applications_domains = []

for url in soup.select(".ant-row>#application_name_list>a[href]"):
    applications_domains.append(url['href'])

for applications_domain in applications_domains:
    try:
        WebDriverWait(driver,10).until(EC.visibility_of_element_located((By.XPATH,"//a[@href='" + applications_domain + "']")))
        driver.find_element_by_xpath("//a[@href='" + applications_domain + "']").click()
    except Exception as e:
        # Print the actual exception instead of swallowing it with a bare except.
        print(type(e).__name__, e)
        soup = BeautifulSoup(driver.page_source,features="html.parser")
        print(soup.find_all("a",{"href":applications_domain}))
        print(f"test error {applications_domain}")
        print("-----------------------")
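A side note on building the XPath by concatenation: "//a[@href='" + applications_domain + "']" breaks as soon as an href contains a single quote. A small helper for producing a safe XPath string literal (an illustrative sketch, not part of Selenium's API):

```python
def xpath_literal(value: str) -> str:
    """Return value as a quoted XPath 1.0 string literal, even if it contains quotes."""
    if "'" not in value:
        return f"'{value}'"
    if '"' not in value:
        return f'"{value}"'
    # Mixed quotes: split on single quotes and stitch the parts back with concat().
    parts = value.split("'")
    return "concat(" + ", \"'\", ".join(f"'{p}'" for p in parts) + ")"

xpath = "//a[@href=" + xpath_literal("/dyfflaunch/domain/gco/app/foo/features") + "]"
print(xpath)  # -> //a[@href='/dyfflaunch/domain/gco/app/foo/features']
```

XPath 1.0 has no escape sequence inside string literals, which is why the mixed-quote case has to fall back to concat().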

The issue was caused by an overlapping element and was solved as per the Solution. The error message returned was selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point, but due to my poor knowledge of error handling the error was not shown as expected. Thank you all for the help!
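For the record, the interception itself can also be handled: catch ElementClickInterceptedException and fall back to a JavaScript click via driver.execute_script("arguments[0].click();", el). The shape of that fallback, sketched driver-free with a stand-in exception class so it runs without a browser (ClickIntercepted and the callables are illustrative stand-ins, not Selenium names):

```python
class ClickIntercepted(Exception):
    """Stand-in for selenium.common.exceptions.ElementClickInterceptedException."""

def click_with_fallback(click, js_click, intercepted=ClickIntercepted):
    """Try a normal click; if an overlay intercepts it, fall back to js_click."""
    try:
        click()
        return "clicked"
    except intercepted as e:
        # Surface the error instead of hiding it, then retry via JavaScript.
        print(f"intercepted: {e}")
        js_click()
        return "js-clicked"

# Simulated usage: the first callable raises, as an overlapping element would.
def blocked():
    raise ClickIntercepted("element click intercepted")

print(click_with_fallback(blocked, lambda: None))  # -> js-clicked
```

With real Selenium the call would look like click_with_fallback(el.click, lambda: driver.execute_script("arguments[0].click();", el), ElementClickInterceptedException); a JavaScript click is not blocked by elements covering the target.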
