
Web Scraping for tables in multiple pages using Beautifulsoup

I am trying to scrape the tables for different weeks across multiple pages, but I keep getting the results from the URL https://www.boxofficemojo.com/weekly/2018W52/. This is the code I am using:

import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
from time import sleep
from random import randint
import re

pages = np.arange(2015, 2016)  # years to scrape (currently only 2015)
week = ['01', '02', '03', '04', '05', '06', '07', '08', '09']
for x in np.arange(10, 11):    # remaining week numbers (currently only 10)
    week.append(str(x))

all_rows = []
for page in pages:
    for x in week:
        response = requests.get('https://www.boxofficemojo.com/weekly/' + str(page) + 'W' + str(x) + '/')
        soup = BeautifulSoup(response.text, 'lxml')
        mov = soup.find_all("table", attrs={"class": "a-bordered"})
        print("Number of tables on site: ", len(mov))
        table1 = mov[0]
        body = table1.find_all("tr")
        head = body[0]        # header row
        body_rows = body[1:]  # data rows
        sleep(randint(2, 10)) # polite delay between requests
        for body_row in body_rows:
            row = []
            for row_item in body_row.find_all("td"):
                # strip non-breaking spaces, newlines and thousands separators
                row.append(re.sub(r"(\xa0)|(\n)|,", "", row_item.text))
            all_rows.append(row)  # append once per row, not once per cell
        print('Page', page, x)
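When every request seems to return the same week, it is worth checking where the server actually sends you: requests follows redirects by default, and the response records the final address. A minimal diagnostic sketch (the week below is just one sample URL); if the site redirects unknown or malformed week URLs to a default page, this will show it:

import requests

response = requests.get('https://www.boxofficemojo.com/weekly/2015W01/')
print(response.status_code)   # 200 if the page was served
print(response.url)           # the final URL after any redirects
print(len(response.history))  # number of redirect hops, 0 if none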

Assuming you want 52 weeks for each year, why not generate the links up front, then retrieve each table with pandas, build a list of those dataframes, and concatenate them into the final dataframe?

import pandas as pd

def get_table(url):
    year = int(url[37:41])     # slice the year out of the fixed-width URL
    week_yr = int(url[42:44])  # slice the week number
    df = pd.read_html(url)[0]  # first table on the page
    df['year'] = year
    df['week_yr'] = week_yr
    return df
    
years = ['2015','2016']
weeks = [str(i).zfill(2) for i in range(1, 53)]
base = 'https://www.boxofficemojo.com/weekly'
urls = [f'{base}/{year}W{week}' for week in weeks for year in years]
results = pd.concat([get_table(url) for url in urls])
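The fixed offsets url[37:41] and url[42:44] only work while the base URL keeps exactly that length; pulling the year and week out with a regex is more robust. A sketch of a drop-in variant of get_table, assuming the same .../weekly/2015W01 URL scheme:

import re
import pandas as pd

def get_table(url):
    # extract '2015' and '01' from '.../weekly/2015W01' regardless of prefix length
    year, week_yr = map(int, re.search(r'(\d{4})W(\d{2})', url).groups())
    df = pd.read_html(url)[0]
    df['year'] = year
    df['week_yr'] = week_yr
    return df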

You could then look at ways of speeding this up, for example:

from multiprocessing import Pool, cpu_count
import pandas as pd

def get_table(url):
    year = int(url[37:41])
    week_yr = int(url[42:44])
    df = pd.read_html(url)[0] 
    df['year'] = year
    df['week_yr'] = week_yr
    return df
     
if __name__ == '__main__':
    
    years = ['2015','2016']
    weeks = [str(i).zfill(2) for i in range(1, 53)]
    base = 'https://www.boxofficemojo.com/weekly'
    urls = [f'{base}/{year}W{week}' for week in weeks for year in years]

    with Pool(cpu_count()-1) as p:
        results = p.map(get_table, urls)

    final = pd.concat(results)
    print(final)
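Since the work here is network-bound rather than CPU-bound, a thread pool is an alternative worth considering: it avoids the cost of spawning processes and pickling results between them. A minimal sketch using the standard library's concurrent.futures, reusing the same get_table and urls as above (the worker count of 8 is an arbitrary choice):

from concurrent.futures import ThreadPoolExecutor
import pandas as pd

# threads are cheap for I/O-bound downloads
with ThreadPoolExecutor(max_workers=8) as executor:
    results = list(executor.map(get_table, urls))

final = pd.concat(results)
print(final)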

