
How to extract a table from a website using Python?

I wrote code to extract a table from this website ( http://www.nhb.gov.in/OnlineClient/MonthlyPriceAndArrivalReport.aspx ), but I am unable to do so.

from selenium import webdriver
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import pandas as pd
import time

chrome_path = r"C:\Users\user\Desktop\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)

driver.get("http://www.nhb.gov.in/OnlineClient/MonthlyPriceAndArrivalReport.aspx")

html_source = driver.page_source
results=[]

# Cauliflower: pick month, category, crop, variety and centre
element_month = driver.find_element_by_id("ctl00_ContentPlaceHolder1_ddlmonth")
drp_month = Select(element_month)
drp_month.select_by_visible_text("January")

element_category_name = driver.find_element_by_id("ctl00_ContentPlaceHolder1_drpCategoryName")
drp_category_name = Select(element_category_name)
drp_category_name.select_by_visible_text("VEGETABLES")

time.sleep(2)  # wait for the dependent crop dropdown to repopulate
element_crop_name = driver.find_element_by_id("ctl00_ContentPlaceHolder1_drpCropName")
drp_crop_name = Select(element_crop_name)
drp_crop_name.select_by_value("117")
time.sleep(2)  # wait for the dependent variety dropdown to repopulate
element_variety_name = driver.find_element_by_id("ctl00_ContentPlaceHolder1_ddlvariety")
drp_variety_name = Select(element_variety_name)
drp_variety_name.select_by_value("18")

element_state = driver.find_element_by_id("ctl00_ContentPlaceHolder1_LsboxCenterList")
drp_state = Select(element_state)
drp_state.select_by_visible_text("AHMEDABAD")

driver.find_element_by_xpath("//*[@id='ctl00_ContentPlaceHolder1_btnSearch']").click()

soup = BeautifulSoup(driver.page_source, 'html.parser')
table = pd.read_html(driver.page_source)[3]
# Index 3 is arbitrary: I tried all indices from 1 to 6 and Python did not
# recognize the table at the bottom of the screen.
print(len(table))
print(table)
with pd.ExcelWriter(r'C:\Users\user\Desktop\python.xlsx') as writer:
    table.to_excel(writer, sheet_name="cauliflower", index=False)  # cauliflower results on a sheet named "cauliflower"

Can you please help me figure out how to extract the table at the bottom of the website? Your help will be appreciated. Thank you in advance.

You can do this without using BeautifulSoup. After clicking the search button:

Induce WebDriverWait() and wait for visibility_of_element_located(), then get the table element's HTML with get_attribute('outerHTML').

Then use pd.read_html(str(tableelement))[0] and print(table).

From the resulting DataFrame you can export to Excel or CSV (a sketch follows the imports below).

Code:

driver.find_element_by_xpath("//*[@id='ctl00_ContentPlaceHolder1_btnSearch']").click()
tableelement = WebDriverWait(driver, 10).until(EC.visibility_of_element_located(
    (By.CSS_SELECTOR, "table#ctl00_ContentPlaceHolder1_GridViewmonthlypriceandarrivalreport")
)).get_attribute('outerHTML')
table = pd.read_html(str(tableelement))[0]
print(table)

You need to import the libraries below.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
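
Once you have the DataFrame, exporting it is plain pandas. A minimal sketch, reusing the output path and sheet name from the question (both are placeholders to adjust for your machine); note that inside a with pd.ExcelWriter(...) block no explicit writer.save() call is needed, since the context manager saves on exit:

# Export sketch: write the DataFrame to Excel and/or CSV.
# Paths are placeholders -- adjust for your machine.
with pd.ExcelWriter(r'C:\Users\user\Desktop\python.xlsx') as writer:
    table.to_excel(writer, sheet_name="cauliflower", index=False)

table.to_csv(r'C:\Users\user\Desktop\python.csv', index=False)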

If you want to use BeautifulSoup as well, then try this code.

driver.find_element_by_xpath("//*[@id='ctl00_ContentPlaceHolder1_btnSearch']").click()
WebDriverWait(driver, 10).until(EC.visibility_of_element_located(
    (By.CSS_SELECTOR, "table#ctl00_ContentPlaceHolder1_GridViewmonthlypriceandarrivalreport")
))
soup = BeautifulSoup(driver.page_source, 'html.parser')
table = pd.read_html(str(soup))[-1]
print(table)
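
Indexing with [-1] simply takes the last table that pd.read_html finds in the page source, which here is the report grid at the bottom of the page, so you don't have to guess a fixed index such as [3].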

Output:

  S.No.            CenterName  ...         Day30         Day31
0    1.0  AHMEDABAD / अहमदाबाद  ...  1.002502e+15  2.005004e+15
1    NaN                   NaN  ...           NaN           NaN

[2 rows x 35 columns]
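
As a side note, the fixed time.sleep(2) pauses in the original script (used while the dependent crop and variety dropdowns repopulate after a postback) can be replaced with the same explicit-wait pattern. A minimal sketch, assuming the element IDs from the question; the exact readiness condition may need tuning for this page:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait, Select
from selenium.webdriver.support import expected_conditions as EC

# Wait until the crop dropdown is clickable again after the category
# selection triggers a postback, instead of sleeping a fixed 2 seconds.
crop_dropdown = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "ctl00_ContentPlaceHolder1_drpCropName"))
)
Select(crop_dropdown).select_by_value("117")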
