Python Selenium code is not working
Why does the loop stop working after opening the first element located by the XPath? I am getting the exception below:
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":"//*[@id='searchresults']/tbody/tr[2]/td[1]"}
Stacktrace:
    at FirefoxDriver.prototype.findElementInternal_ (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/driver-component.js:10723)
    at FirefoxDriver.prototype.findElement (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/driver-component.js:10732)
    at DelayedCommand.prototype.executeInternal_/h (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/command-processor.js:12614)
    at DelayedCommand.prototype.executeInternal_ (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/command-processor.js:12619)
    at DelayedCommand.prototype.execute/< (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/command-processor.js:12561)
Code:
from selenium import webdriver
from texttable import len
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
driver=webdriver.Firefox()
driver.get('https://jobs.ericsson.com/search/')
driver.maximize_window()
driver.find_element_by_css_selector('[type="text"][id="keywordsearch-q"]').send_keys('Python')
driver.find_element_by_css_selector('[class="btn"][type="submit"]').click()
i=len("//*[@id='searchresults']/tbody/tr/td")
for j in range(1,i+1):
    driver.find_element_by_xpath("//*[@id='searchresults']/tbody/tr[%d]/td[1]"%j).click()
    print driver.find_element_by_id("job-title").text
    driver.back()
    continue
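(An editorial note on why the loop overruns, based on a reading of the code above rather than anything stated in the question: len() applied to the XPath string counts its characters, not the table rows the XPath would match, so i becomes 36 and the loop soon asks for a row that does not exist, raising NoSuchElementException. A minimal demonstration:)

```python
xpath = "//*[@id='searchresults']/tbody/tr/td"

# len() on a string counts characters, not matched elements:
i = len(xpath)
print(i)  # 36 -- the loop above then runs up to tr[36], far past the real rows

# Counting matched elements needs find_elements_*, e.g. (with a live driver):
# i = len(driver.find_elements_by_xpath(xpath))
```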
Question 2: Why is the length of the list displayed as 12 when there are only 5 href elements in it?
from selenium import webdriver
from texttable import len
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
driver=webdriver.Firefox()
driver.delete_all_cookies()
driver.get('https://jobs.ericsson.com/search/')
driver.maximize_window()
driver.find_element_by_css_selector('[type="text"][id="keywordsearch-q"]').send_keys('Python')
driver.find_element_by_css_selector('[class="btn"][type="submit"]').click()
#currenturl = driver.current_url
pages=driver.find_elements_by_css_selector('a[rel="nofollow"]')
print pages
print 'Its working'
pages1=[]
for page1 in pages:
    pages1.append(page1.get_attribute('href'))
print int(len(pages1))
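(A possible explanation, not verified against the live page: a CSS selector like a[rel="nofollow"] can match anchors that carry no href at all, and the same pagination block is often rendered twice on a results page, so the element list is longer than the set of usable links. Filtering out empty hrefs before counting makes the difference visible. FakeElement below is a stand-in for a WebElement, used only so the sketch runs without a browser:)

```python
def collect_hrefs(elements):
    """Keep only non-empty href values from a list of (web) elements."""
    hrefs = []
    for el in elements:
        href = el.get_attribute('href')
        if href:  # skip None and empty strings
            hrefs.append(href)
    return hrefs

# Minimal stand-in for Selenium's WebElement (illustrative only).
class FakeElement:
    def __init__(self, href):
        self._href = href
    def get_attribute(self, name):
        return self._href if name == 'href' else None

elements = [
    FakeElement('https://jobs.ericsson.com/search/?q=Python&startrow=25'),
    FakeElement(None),   # anchor matched by the selector but with no href
    FakeElement(''),     # anchor with an empty href
    FakeElement('https://jobs.ericsson.com/search/?q=Python&startrow=50'),
]
print(len(elements))                 # 4 matched elements
print(len(collect_hrefs(elements)))  # 2 usable links
```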
Question 3: How to get the elements under HTML tags
a. How to get the 1 – 25 and the 104 separately under the b tags?
Please refer to the URL: https://jobs.ericsson.com/search/?q=Python (the results section is displayed at the bottom of the page).
<div class="paginationShell clearfix" lang="en_US" xml:lang="en_US">
<div class="pagination well well-small">
<span class="pagination-label-row">
<span class="paginationLabel">
Results
<b>1 – 25</b>
of
<b>104</b>
</span>
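(For 3a: each value sits in its own b element, so with Selenium one would typically read them separately, for example via driver.find_elements_by_css_selector('span.paginationLabel b') and then .text on index 0 and 1 — untested against the live page. The same extraction can be shown offline with a regex over the quoted HTML:)

```python
import re

# Pagination label HTML as quoted in the question (assumed verbatim).
html = """<span class="paginationLabel">
Results
<b>1 – 25</b>
of
<b>104</b>
</span>"""

# Each <b> element holds one value; collect them in document order.
values = re.findall(r"<b>(.*?)</b>", html)
page_range, total = values[0], values[1]
print(page_range)  # 1 – 25
print(total)       # 104
```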
b. How to get the Job ID from the HTML?
<div class="job">
<span itemprop="description">
<b>Req ID:</b>
128378
<br/>
<br/>
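(For 3b: the Req ID number is a bare text node that follows the <b>Req ID:</b> label rather than living in its own tag, so a tag lookup only reaches the label. With Selenium one would typically read the whole description text (e.g. the span's .text) and pull the digits out with a regex — untested against the live page. A self-contained sketch on the quoted fragment:)

```python
import re

# Job description HTML as quoted in the question (assumed verbatim).
html = """<div class="job">
<span itemprop="description">
<b>Req ID:</b>
128378
<br/>"""

# The number follows the "Req ID:" label as loose text, so match past the
# closing </b> and capture the first run of digits.
match = re.search(r"Req ID:</b>\s*(\d+)", html)
print(match.group(1))  # 128378
```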
Please try the following:
for job in range(len(driver.find_elements_by_class_name('jobTitle-link'))):
    driver.implicitly_wait(5)
    driver.find_elements_by_class_name('jobTitle-link')[job].click()
    print driver.find_element_by_id("job-title").text
    driver.back()
This may or may not help you, but based on my own experience, I typically run into this error when my page hasn't fully loaded. Adding a time.sleep(1) before searching for the element usually fixes the problem for me (if the code is correct).
import time
#Skip your other code
for j in range(1,i+1):
    time.sleep(1)
    driver.find_element_by_xpath("//*[@id='searchresults']/tbody/tr[%d]/td[1]"%j).click()
    print driver.find_element_by_id("job-title").text
    driver.back()
    continue
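(A fixed time.sleep(1) either waits too long or not long enough; Selenium's WebDriverWait instead polls a condition until it succeeds or a timeout expires. The polling loop at its core can be sketched without a browser — wait_until and element_ready are illustrative names here, not Selenium API:)

```python
import time

def wait_until(condition, timeout=10, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.
    This mirrors the retry loop that WebDriverWait.until runs internally."""
    end = time.monotonic() + timeout
    while time.monotonic() < end:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError('condition not met within %s seconds' % timeout)

# Demo: the "element" only becomes available on the third poll.
attempts = {'n': 0}
def element_ready():
    attempts['n'] += 1
    return 'found' if attempts['n'] >= 3 else None

print(wait_until(element_ready, timeout=5, poll=0.01))  # found
```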
Here is a solution that works. The idea is not to click on each link, but rather to store the URLs in a list and then navigate to each one:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
driver=webdriver.Firefox()
driver.get('https://jobs.ericsson.com/search/')
driver.maximize_window()
driver.find_element_by_css_selector('[type="text"][id="keywordsearch-q"]').send_keys('Python')
driver.find_element_by_css_selector('[class="btn"][type="submit"]').click()
#To further process preserve the current url
currenturl = driver.current_url
#Get all the elements by class name
jobs = driver.find_elements_by_class_name('jobTitle-link')
jobslink = []
#Get hyperlink urls from the jobs elements
#This way we avoid clicking each link and going back to the previous page
for job in jobs:
    jobslink.append(job.get_attribute('href'))
#Get each element page
for job in jobslink:
    driver.get(job)
    print driver.find_element_by_id("job-title").text