
Web scraping using selenium and bs4

I'm trying to build a dataframe by web scraping this page:

https://www.schoolholidayseurope.eu/choose-a-country

First I tell selenium to click on the page of my choice, then I use XPath and tag elements to build the header and body, but I don't get the format I want: my elements are NaN or duplicated.

Here is my script:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
import pandas as pd

def get_browser(url_selector):
    """Get the browser (a "driver")."""
    #option = webdriver.ChromeOptions()
    #option.add_argument(' — incognito')
    path_to_chromedriver = r"C:/Users/xxxxx/Downloads/chromedriver_win32/chromedriver.exe"
    browser = webdriver.Chrome(executable_path=path_to_chromedriver)
    browser.get(url_selector)

    # Try with Italy
    browser.find_element_by_xpath(italie_buton_xpath).click()

    # Quit the browser if loading takes more than 45 seconds,
    # using the site logo as the flag that the page has loaded.
    timeout = 45
    try:
        WebDriverWait(browser, timeout).until(
            EC.visibility_of_element_located((By.XPATH, '//*[@id="s5_logo_wrap"]/img')))
    except TimeoutException:
        print("Timed out waiting for page to load")
        browser.quit()
    return browser

browser = get_browser(url_selector)
headers = browser.find_element_by_xpath('//*[@id="s5_component_wrap_inner"]/main/div[2]/div[2]/div[3]/table/thead').find_elements_by_tag_name('tr')
headings = [i.text.strip() for i in headers]
bs_obj = BeautifulSoup(browser.page_source, 'html.parser')
rows = bs_obj.find_all('table')[0].find('tbody').find_all('tr')[1:]
table = []

for row in rows:
    line = next(td.get_text() for td in row.find_all("td"))
    print(line)
    table.append(line)
browser.quit()

pd.DataFrame(table, columns=headings)

It returns a one-column dataframe like:

    School Holiday Region Start date End date Week
0   Easter holidays 2018
1   REMARK: Small differences by region are possi...
2   Summer holiday 2018
3   REMARK: First region through to last region.
4   Christmas holiday 2018

There are three issues: I don't want the REMARK rows, the multi-word headings such as School Holiday, Start date and End date are treated as separate words, and the whole dataframe is unsplit (everything lands in one column).

If I split my headings and lines, their shapes mismatch: because of the REMARK rows I get 9 elements in my list instead of 3, and because of the separated words I get 8 elements instead of 5 in the heading. A sketch of one possible fix follows.
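One way to address all three issues at once (a minimal sketch, not the original code) is to read the headings from the individual th cells so that multi-word headings stay whole, keep every td in a row instead of only the first, and drop the REMARK rows. It assumes bs_obj is the same BeautifulSoup object built above from browser.page_source and that the header cells are th elements:

# Sketch: assumes bs_obj is the BeautifulSoup object built above and that
# the first table on the page is the holiday table with <th> header cells.
table_tag = bs_obj.find_all('table')[0]

# One heading per <th> cell, so "Start date" stays a single heading.
headings = [th.get_text(strip=True) for th in table_tag.find('thead').find_all('th')]

clean_rows = []
for tr in table_tag.find('tbody').find_all('tr'):
    cells = [td.get_text(strip=True) for td in tr.find_all('td')]  # every cell, not just the first
    # Drop REMARK rows and anything that doesn't match the header width.
    if len(cells) == len(headings) and not cells[0].startswith('REMARK'):
        clean_rows.append(cells)

df = pd.DataFrame(clean_rows, columns=headings)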

You can find all the links on the main page, and then iterate over each url with selenium:

from selenium import webdriver
from bs4 import BeautifulSoup as soup
import re, contextlib, pandas

d = webdriver.Chrome('/Users/jamespetullo/Downloads/chromedriver')
d.get('https://www.schoolholidayseurope.eu/choose-a-country')

# Collect (country name, relative url) pairs from the country list,
# discarding the first list item, which is not a country.
_, *countries = [(lambda x: [x.text, x['href']])(i.find('a'))
                 for i in soup(d.page_source, 'html.parser').find_all('li', {'class': re.compile(r'item\d+$')})]

@contextlib.contextmanager
def get_table(source: str):
    # Yield the header (<th>) and data (<td>) cells for every row of the holiday table.
    yield [[[i.text for i in c.find_all('th')], [i.text for i in c.find_all('td')]]
           for c in soup(source, 'html.parser').find('table', {'class': 'zebra'}).find_all('tr')]

results = {}
for country, url in countries:
    d.get(f'https://www.schoolholidayseurope.eu{url}')
    with get_table(d.page_source) as source:
        results[country] = source

def clean_results(_data):
    # The first row holds the headers; zip each remaining row's cells with them.
    [headers, _], *data = _data
    return [dict(zip(headers, i)) for _, i in data]

final_countries = {a: clean_results(b) for a, b in results.items()}
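Since the end goal is a dataframe, each country's list of dicts can be passed straight to pandas. 'Italy' below is just an assumed key; use whatever country names actually appear in final_countries:

# Hypothetical usage: 'Italy' is an assumed key in final_countries.
df_italy = pandas.DataFrame(final_countries['Italy'])
print(df_italy.head())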
