
How to extract a table from the website using python?

I wrote some code to extract a table from this website ( http://www.nhb.gov.in/OnlineClient/MonthlyPriceAndArrivalReport.aspx ), but I have not been able to get it to work.

from selenium import webdriver
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import pandas as pd
import time

chrome_path = r"C:\Users\user\Desktop\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)

driver.get("http://www.nhb.gov.in/OnlineClient/MonthlyPriceAndArrivalReport.aspx")

html_source = driver.page_source
results=[]

#cauliflower
element_month = driver.find_element_by_id("ctl00_ContentPlaceHolder1_ddlmonth")
drp_month = Select(element_month)
drp_month.select_by_visible_text("January")

element_category_name = driver.find_element_by_id("ctl00_ContentPlaceHolder1_drpCategoryName")
drp_category_name = Select(element_category_name)
drp_category_name.select_by_visible_text("VEGETABLES")

time.sleep(2)
element_crop_name = driver.find_element_by_id("ctl00_ContentPlaceHolder1_drpCropName")
drp_crop_name = Select(element_crop_name)
drp_crop_name.select_by_value("117")
time.sleep(2)
element_variety_name = driver.find_element_by_id("ctl00_ContentPlaceHolder1_ddlvariety")
drp_variety_name = Select(element_variety_name)
drp_variety_name.select_by_value("18")

element_state = driver.find_element_by_id("ctl00_ContentPlaceHolder1_LsboxCenterList")
drp_state = Select(element_state)
drp_state.select_by_visible_text("AHMEDABAD")

driver.find_element_by_xpath("""//*[@id="ctl00_ContentPlaceHolder1_btnSearch"]""").click()

soup = BeautifulSoup(driver.page_source, 'html.parser')
table = pd.read_html(driver.page_source)[3]
# number three is arbitrary; I tried all indices from 1 to 6 and python did not
# recognize the table at the bottom of the screen
print(len(table))
print(table)
with pd.ExcelWriter(r'C:\Users\user\Desktop\python.xlsx') as writer:
    table.to_excel(writer, sheet_name="cauliflower", index=False)  # cauliflower results on a sheet named "cauliflower"

Can you help me figure out how to extract the table at the bottom of the page? Your help would be greatly appreciated. Thank you in advance.

You can do this without using BeautifulSoup. After clicking the search button:

Induce WebDriverWait() and wait for visibility_of_element_located(), then get the table element with get_attribute('outerHTML').

Then read it with pd.read_html(str(tableelement))[0] and print(table).

The rest, exporting to Excel or CSV, you can do as before; a sketch follows the code below.

Code

driver.find_element_by_xpath("//*[@id='ctl00_ContentPlaceHolder1_btnSearch']").click()
tableelement = WebDriverWait(driver, 10).until(EC.visibility_of_element_located(
    (By.CSS_SELECTOR, "table#ctl00_ContentPlaceHolder1_GridViewmonthlypriceandarrivalreport"))).get_attribute('outerHTML')
table = pd.read_html(str(tableelement))[0]
print(table)
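
A minimal sketch of the export step, assuming the table DataFrame read above (the Excel path is the one from the question; the CSV path is a hypothetical example):

# Write the DataFrame to Excel; the with block saves the file on exit.
with pd.ExcelWriter(r'C:\Users\user\Desktop\python.xlsx') as writer:
    table.to_excel(writer, sheet_name="cauliflower", index=False)
# Or write a CSV instead (hypothetical path):
table.to_csv(r'C:\Users\user\Desktop\python.csv', index=False)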

You need to import the following libraries.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

If you also want to use BeautifulSoup, try this code.

driver.find_element_by_xpath("//*[@id='ctl00_ContentPlaceHolder1_btnSearch']").click()
WebDriverWait(driver, 10).until(EC.visibility_of_element_located(
    (By.CSS_SELECTOR, "table#ctl00_ContentPlaceHolder1_GridViewmonthlypriceandarrivalreport")))
soup = BeautifulSoup(driver.page_source, 'html.parser')
table = pd.read_html(str(soup))[-1]
print(table)
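
If you prefer to locate the results grid explicitly rather than taking the last table on the page, here is a minimal sketch using the same grid id as in the selector above:

# Find the results grid by its id, then parse only that element.
grid = soup.find("table", id="ctl00_ContentPlaceHolder1_GridViewmonthlypriceandarrivalreport")
table = pd.read_html(str(grid))[0]
print(table)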

Output

  S.No.            CenterName  ...         Day30         Day31
0    1.0  AHMEDABAD / अहमदाबाद  ...  1.002502e+15  2.005004e+15
1    NaN                   NaN  ...           NaN           NaN

[2 rows x 35 columns]
