
I'd like to get the table text from <tr> and <td> elements using Selenium.

This is what I have tried so far:

from urllib.request import urlopen
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException

url ='http://marketdata.krx.co.kr/mdi#document=080120&547c5e15ef32e37dc099b89d69ac8970-[object%20HTMLDivElement]=1&547c5e15ef32e37dc099b89d69ac8970-[object%20HTMLDivElement]=2&547c5e15ef32e37dc099b89d69ac8970-[object%20HTMLDivElement]=1&547c5e15ef32e37dc099b89d69ac8970-object%20HTMLDivElement]=1'

driver = webdriver.Chrome()
driver.get(url)
element = driver.find_element(By.XPATH, '//select[@name="upclss"]')
all_options = element.find_elements(By.TAG_NAME, "option")
for option in all_options:
    if option.text == "원자재":  # the "raw materials" category
        option.click()
        driver.implicitly_wait(5)
        another = driver.find_element(By.XPATH, '//li[@class="active"]')
        another.click()
        driver.implicitly_wait(5)
        html = driver.page_source
        soup = BeautifulSoup(html, "html.parser")
        table = soup.find_all('table')[0]
        rows = table.find_all('tr')
        for row in rows:
            cells = row.find_all('td')
            for cell in cells:
                cell_content = cell.get_text()
                print(cell_content)
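For reference, the BeautifulSoup half of this works on its own once the table HTML is in hand. Here is a minimal, self-contained demo against an inline snippet (the table content below is made up, not the actual KRX data):

```python
from bs4 import BeautifulSoup

# Hypothetical two-row table standing in for driver.page_source
html = """
<table>
  <tr><td>Gold</td><td>100</td></tr>
  <tr><td>Oil</td><td>200</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find_all("table")[0]

# Collect the text of every <td>, row by row
cells = [td.get_text() for td in table.find_all("td")]
print(cells)  # ['Gold', '100', 'Oil', '200']
```

If this part prints nothing against the real page, the table simply is not in the page source yet when it is parsed, which points at a waiting problem rather than a parsing problem.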

What more do I need to do to get the table contents from the URL above and print them? Many thanks!

Why don't you get it from the page source? I know you're using Python, but in Java I would solve it this way:

I would treat the page source as a String and take the substring that starts with <table> and ends with </table>, or whatever range you want...

From that substring I would extract the values I want the same way, building a substring that starts with the <td> tag and ends with the </td> tag.

The remaining text is the table data you see on the web page.
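Translated to Python, that substring idea might look like the sketch below. It uses plain string slicing on a made-up page-source string; real markup has attributes and whitespace inside the tags, so this is illustrative only:

```python
# Hypothetical page source standing in for driver.page_source
page_source = '<html><body><table><tr><td>Gold</td><td>100</td></tr></table></body></html>'

# Substring from <table> to the end of </table>
start = page_source.index('<table>')
end = page_source.index('</table>') + len('</table>')
table_html = page_source[start:end]

# Same trick for each <td>...</td> pair
cells = []
pos = 0
while True:
    td_start = table_html.find('<td>', pos)
    if td_start == -1:
        break
    td_end = table_html.find('</td>', td_start)
    cells.append(table_html[td_start + len('<td>'):td_end])
    pos = td_end
print(cells)  # ['Gold', '100']
```

In practice an HTML parser like BeautifulSoup is far more robust than string slicing, since real-world tags carry attributes that break fixed-string matching.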

The output (value) of html = driver.page_source would help here, but I assume this will work as well:

from urllib.request import urlopen
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException

url ='http://marketdata.krx.co.kr/mdi#document=080120&547c5e15ef32e37dc099b89d69ac8970-[object%20HTMLDivElement]=1&547c5e15ef32e37dc099b89d69ac8970-[object%20HTMLDivElement]=2&547c5e15ef32e37dc099b89d69ac8970-[object%20HTMLDivElement]=1&547c5e15ef32e37dc099b89d69ac8970-object%20HTMLDivElement]=1'

driver = webdriver.Chrome()
driver.get(url)
element = driver.find_element(By.XPATH, '//select[@name="upclss"]')
all_options = element.find_elements(By.TAG_NAME, "option")
for option in all_options:
    if option.text == "원자재":  # the "raw materials" category
        option.click()
        driver.implicitly_wait(5)
        another = driver.find_element(By.XPATH, '//li[@class="active"]')
        another.click()
        driver.implicitly_wait(5)
        # note the plural find_elements: we want every cell, not just the first;
        # //table//tr also matches rows nested inside a browser-inserted <tbody>
        tds = driver.find_elements(By.XPATH, "//table//tr/td")
        for td in tds:
            print(td.text)

Finally, it was solved within Selenium itself, not through BeautifulSoup...

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

url = '...'

driver = webdriver.Chrome()
driver.get(url)
element = driver.find_element(By.XPATH, '//select[@name="upclss"]')
all_options = element.find_elements(By.TAG_NAME, "option")
for option in all_options:
    print(option.text)
    option.click()
    driver.implicitly_wait(5)
    another = driver.find_element(By.XPATH, '//li[@class="active"]')
    another.click()
    time.sleep(5)  # wait for the grid to finish rendering
    header = driver.find_element(By.XPATH, '//table[@class="CI-GRID-HEADER-TABLE"]').text
    other = driver.find_element(By.XPATH, '//table[@class="CI-GRID-BODY-TABLE"]').text
    print(header)
    print(other)
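A WebElement's .text returns the grid's visible text with one line per rendered row, so the printed output can be split back into rows and cells. The cell separator depends on the page's layout; the whitespace split below is an assumption, and the sample string is made up:

```python
# Made-up sample of what the body table's .text might look like
other = "Gold 100 2020-01-02\nOil 200 2020-01-03"

# One line per grid row; split each row on whitespace into cells
rows = [line.split() for line in other.splitlines()]
print(rows)  # [['Gold', '100', '2020-01-02'], ['Oil', '200', '2020-01-03']]
```

If any cell values themselves contain spaces, splitting on whitespace will break them apart; in that case going back to per-cell td.text is safer.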
