
Get content of table in website with Python Selenium

I am trying to get the content of a table on a website using Selenium. It seems the website is set up in a rather complex manner. I can't find any element, class or content to use in the find_element_by_... functions.

If anyone has an idea how to get the content of the second table, starting with the header Staffel, Nr., Datum, ..., Ergebnis, Bem., it would be a big help for me. I tried a lot (starting with urllib2, ...). In principle the following script works - loading the site and looping through high-level containers. But I am not sure how to get the mentioned table content.

from selenium import webdriver
from selenium.webdriver.common.by import By

the_url = 'https://www.hvw-online.org/spielbetrieb/ergebnissetabellen/#/league?ogId=3&lId=37133&allGames=1'

driver = webdriver.Chrome()
driver.get(the_url)

elem_high = driver.find_elements(By.CLASS_NAME, 'container')
for e in elem_high:
    print(e)

# what class or element to search for second table
elem_deep = driver.find_elements(By.CLASS_NAME, 'row.game')

driver.close()

Any ideas or comments are welcome. Thanks.

To get the rows you have to wait for the page to load using WebDriverWait:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

the_url = 'https://www.hvw-online.org/spielbetrieb/ergebnissetabellen/#/league?ogId=3&lId=37133&allGames=1'

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get(the_url)

elem_deep = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "table.schedule tbody > tr")))
for e in elem_deep:
    print(e.text)
    # Link in last column
    href = e.find_element(By.CSS_SELECTOR, "a[ng-if='row.game.sGID']").get_attribute("href")
    print(href)
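
Note that the ng-if in the selector suggests the report link is only rendered for rows that actually have an sGID, so find_element may raise NoSuchElementException on rows without it. A minimal guard, assuming such rows occur (e.g. games not yet played):

from selenium.common.exceptions import NoSuchElementException

for e in elem_deep:
    print(e.text)
    try:
        # The ng-if condition implies the anchor only exists when a report ID (sGID) is set
        href = e.find_element(By.CSS_SELECTOR, "a[ng-if='row.game.sGID']").get_attribute("href")
        print(href)
    except NoSuchElementException:
        # Row without a report link; skip it (assumed possible)
        pass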

But a better solution is to use the requests package to get all the information from the website. The code below is an example of how you can scrape much faster and more easily:

import requests

url = 'https://spo.handball4all.de/service/if_g_json.php?ca=1&cl=37133&cmd=ps&og=3'
response = requests.get(url).json()

futureGames = response[0]["content"]["futureGames"]["games"]
for game in futureGames:
    print(game["gHomeTeam"])
    print(game["gGuestTeam"])
    # Link in last column
    print("http://spo.handball4all.de/misc/sboPublicReports.php?sGID=%s" % game["sGID"])

    # You can use example of data below to get all you need
    # {
    #     'gID': '2799428',
    #     'sGID': '671616',
    #     'gNo': '61330',
    #     'live': False,
    #     'gToken': '',
    #     'gAppid': '',
    #     'gDate': '30.09.18',
    #     'gWDay': 'So',
    #     'gTime': '14:00',
    #     'gGymnasiumID': '303',
    #     'gGymnasiumNo': '6037',
    #     'gGymnasiumName': 'Sporthalle beim Sportzentrum',
    #     'gGymnasiumPostal': '71229',
    #     'gGymnasiumTown': 'Leonberg',
    #     'gGymnasiumStreet': 'Steinstraße 18',
    #     'gHomeTeam': 'SV Leonb/Elt',
    #     'gGuestTeam': 'JSG Echaz-Erms 2',
    #     'gHomeGoals': '33',
    #     'gGuestGoals': '20',
    #     'gHomeGoals_1': '19',
    #     'gGuestGoals_1': '7',
    #     'gHomePoints': '2',
    #     'gGuestPoints': '0',
    #     'gComment': ' ',
    #     'gGroupsortTxt': ' ',
    #     'gReferee': ' '
    # }
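
Since the endpoint returns plain JSON, the whole games list can also be flattened into a pandas DataFrame in one step. A minimal sketch; the column subset is an assumption based on the keys shown above:

import requests
import pandas as pd

url = 'https://spo.handball4all.de/service/if_g_json.php?ca=1&cl=37133&cmd=ps&og=3'
games = requests.get(url).json()[0]["content"]["futureGames"]["games"]

# One row per game; pandas derives the columns from the dict keys
df = pd.DataFrame(games)
print(df[["gDate", "gTime", "gHomeTeam", "gGuestTeam", "gHomeGoals", "gGuestGoals"]])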

You can use the CSS class selector

.schedule

That is:

table = driver.find_element(By.CSS_SELECTOR, ".schedule")

You may need a wait beforehand.


Then loop over the content:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

driver = webdriver.Chrome()
url = 'https://www.hvw-online.org/spielbetrieb/ergebnissetabellen/#/league?ogId=3&lId=37133&allGames=1'
driver.get(url)

# Wait until the schedule table is present before reading it
table = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, '.schedule')))
headers = [elem.text for elem in driver.find_elements(By.CSS_SELECTOR, '.schedule th')]
results = []
# Skip the header row, then collect the cell text of each data row
for row in table.find_elements(By.CSS_SELECTOR, 'tr')[1:]:
    results.append([td.text for td in row.find_elements(By.CSS_SELECTOR, 'td')])
df = pd.DataFrame(results, columns=headers)
print(df)
driver.quit()
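
If you want to keep the scraped schedule, the DataFrame can be written out directly; a minimal sketch (the filename is an assumption):

# Persist the scraped table for later use; 'schedule.csv' is just an example name
df.to_csv('schedule.csv', index=False)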
