
Multiple Pages Web Scraping with Python and Beautiful Soup

I'm trying to write code to scrape some data from pages about hotels. The final information (hotel names and addresses) should be exported to a CSV file. The code works, but only on one page...

import requests
import pandas as pd
from bs4 import BeautifulSoup # HTML data structure

page_url = requests.get('https://e-turysta.pl/noclegi-krakow/')
soup = BeautifulSoup(page_url.content, 'html.parser')

list = soup.find(id='nav-lista-obiektow')
items = list.find_all(class_='et-list__details flex-grow-1 d-flex d-md-block flex-column')

nazwa_noclegu = [item.find(class_='h3 et-list__details__name').get_text() for item in items]
adres_noclegu = [item.find(class_='et-list__city').get_text() for item in items]

dane = pd.DataFrame(
    {
        'nazwa' : nazwa_noclegu,
        'adres' : adres_noclegu
    }
)

print(dane)

dane.to_csv('noclegi.csv')

I tried a loop, but it doesn't work:

for i in range(22):
    url = requests.get('https://e-turysta.pl/noclegi-krakow/'.format(i+1)).text
    soup = BeautifulSoup(url, 'html.parser')

Any ideas?

In your loop you call the .format() function, but the string you are formatting contains no placeholder, so the argument is ignored and the same URL is requested every time. You need to insert the braces into the string:

for i in range(22):
    url = requests.get('https://e-turysta.pl/noclegi-krakow/{}'.format(i+1)).text
    soup = BeautifulSoup(url, 'html.parser')
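The effect described above is easy to see in isolation: .format() silently ignores its arguments when the string has no {} placeholder, which is why every request in the original loop hit the same page.

```python
base = 'https://e-turysta.pl/noclegi-krakow/'

# No placeholder in the string: the argument is silently ignored
print(base.format(5))            # https://e-turysta.pl/noclegi-krakow/

# With a placeholder the argument is substituted as expected
print((base + '{}').format(5))   # https://e-turysta.pl/noclegi-krakow/5
```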

The URL is also different from the one the site actually uses - you forgot ?page=

and you have to use {} to insert the value into the string:

url = 'https://e-turysta.pl/noclegi-krakow/?page={}'.format(i+1)

or concatenate it:

url = 'https://e-turysta.pl/noclegi-krakow/?page=' + str(i+1)
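On Python 3.6+, an f-string is an equivalent and often more readable alternative to both of the above:

```python
i = 0  # illustrative loop index, as in the answer's range(22)
url = f'https://e-turysta.pl/noclegi-krakow/?page={i + 1}'
print(url)  # https://e-turysta.pl/noclegi-krakow/?page=1
```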

EDIT: working code

import requests
from bs4 import BeautifulSoup # HTML data structure
import pandas as pd

def get_page_data(number):
    print('number:', number)

    url = 'https://e-turysta.pl/noclegi-krakow/?page={}'.format(number)
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')

    container = soup.find(id='nav-lista-obiektow')
    items = container.find_all(class_='et-list__details flex-grow-1 d-flex d-md-block flex-column')

    # better to group name and address per item - that way you could
    # substitute a default value when nazwa or adres is missing
    dane = []

    for item in items:
        nazwa = item.find(class_='h3 et-list__details__name').get_text(strip=True)
        adres = item.find(class_='et-list__city').get_text(strip=True)
        dane.append([nazwa, adres])

    return dane

# --- main ---

wszystkie_dane = []
for number in range(1, 23):
    dane_na_stronie = get_page_data(number)
    wszystkie_dane.extend(dane_na_stronie)

dane = pd.DataFrame(wszystkie_dane, columns=['nazwa', 'adres'])

dane.to_csv('noclegi.csv', index=False)
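The comment in the working code suggests substituting a default value when nazwa or adres is missing. A minimal sketch of such a helper (the function name and default text are illustrative, not part of the original answer):

```python
from bs4 import BeautifulSoup

def get_text_or_default(item, class_, default='brak danych'):
    # Return the stripped text of the first element with the given class,
    # or a default string when no such element exists.
    tag = item.find(class_=class_)
    return tag.get_text(strip=True) if tag else default

# Small inline snippet mimicking one listing item
html = '<div><span class="et-list__city"> Krakow </span></div>'
item = BeautifulSoup(html, 'html.parser')

print(get_text_or_default(item, 'et-list__city'))             # Krakow
print(get_text_or_default(item, 'et-list__details__name'))    # brak danych
```

Inside the loop, `item.find(...).get_text(...)` would then become `get_text_or_default(item, ...)`, so a listing with a missing field no longer raises AttributeError on None.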
