
Web scraping of a website in Python

On the website below, when I select the date 27-Jun-2017 and Series/Run as "USD RATES 1100" and submit, the rates open below on that page. Up to this point I am able to do it programmatically. But I need the 10-year rate (the answer is 2.17) for the above date and rate combination. Can someone please tell me what error I am making in the last line of the code?

https://www.theice.com/marketdata/reports/180

from selenium import webdriver

chrome_path = r"C:\Users\vick\Desktop\python_1\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("https://www.theice.com/marketdata/reports/180")
try:
    driver.find_element_by_xpath('/html/body/div[3]/div/div[2]/div/div/div[2]/button').click()
except:
    pass

driver.find_element_by_xpath('//*[@id="seriesNameAndRunCode_chosen"]/a/span').click()
driver.find_element_by_xpath('//*[@id="seriesNameAndRunCode_chosen"]/div/ul/li[5]').click()
driver.find_element_by_xpath('//*[@id="reportDate"]').clear()
driver.find_element_by_xpath('//*[@id="reportDate"]').send_keys("27-Jul-2017")
driver.find_element_by_xpath('//*[@id="selectForm"]/input').click()
driver.execute_script("window.scrollTo(0, document.body.scrollHeight)/2;")
print(driver.find_element_by_xpath('//*[@id="report-content"]/div/div/table/tbody/tr[10]/td[2]').get_attribute('innerHTML'))

Error I am getting in last line: NoSuchElementException: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="report-content"]/div/div/table/tbody/tr[10]/td[2]"}

Thank you for the help.

You have to wait a second or two after you submit the form, because the report is loaded asynchronously. Like:

import time

from selenium import webdriver

chrome_path = r"C:\Users\vick\Desktop\python_1\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("https://www.theice.com/marketdata/reports/180")
try:
    driver.find_element_by_xpath('/html/body/div[3]/div/div[2]/div/div/div[2]/button').click()
except:
    pass

driver.find_element_by_xpath('//*[@id="seriesNameAndRunCode_chosen"]/a/span').click()
driver.find_element_by_xpath('//*[@id="seriesNameAndRunCode_chosen"]/div/ul/li[5]').click()
driver.find_element_by_xpath('//*[@id="reportDate"]').clear()
driver.find_element_by_xpath('//*[@id="reportDate"]').send_keys("27-Jul-2017")
driver.find_element_by_xpath('//*[@id="selectForm"]/input').click()
driver.execute_script("window.scrollTo(0, document.body.scrollHeight / 2);")  # note: the "/2" must be inside scrollTo()
time.sleep(2)  # here is the part where you should wait
print(driver.find_element_by_xpath('//*[@id="report-content"]/div/div/table/tbody/tr[10]/td[2]').get_attribute('innerHTML'))

Option B is to wait until the element has been loaded:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import TimeoutException

....
driver.execute_script("window.scrollTo(0, document.body.scrollHeight / 2);")
timeout = 5
try:
    element_present = EC.presence_of_element_located((By.ID, 'report-content'))
    WebDriverWait(driver, timeout).until(element_present)
except TimeoutException:
    print("Timed out waiting for page to load")
......
print(driver.find_element_by_xpath('//*[@id="report-content"]/div/div/table/tbody/tr[10]/td[2]').get_attribute('innerHTML'))

In the first case Python waits 2 seconds unconditionally and then continues. In the second case the WebDriver polls until the element is present (for at most 5 seconds) and continues as soon as it appears.
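The difference between the two approaches can be illustrated without a browser by a small polling helper. This is a simplified sketch of what `WebDriverWait.until` does internally; the names `wait_until`, `condition`, and `poll` are illustrative, not Selenium API:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.5):
    """Repeatedly call `condition` until it returns a truthy value,
    raising TimeoutError if `timeout` seconds pass first.
    This mirrors the explicit-wait loop inside WebDriverWait.until."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result  # return as soon as the condition holds
        time.sleep(poll)   # unlike time.sleep(2), we re-check frequently
    raise TimeoutError("condition not met within %.1f seconds" % timeout)
```

A plain `time.sleep(2)` always costs the full 2 seconds and still fails if the page takes longer; the polling loop returns as soon as the condition holds and only gives up at the timeout.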

Tried the code and it works. Hope that helped.
