
Unable to crawl multiple URLs

I have two functions: one fetches and parses a web page, and the other looks for a particular class in it and extracts the href from the matching tag.

url="https://www.poynter.org/ifcn-covid-19-misinformation/page/220/"

def url_parse(site):
   hdr = {'User-Agent': 'Mozilla/5.0'}
   req = Request(site,headers=hdr)
   page = urlopen(req)
   soup = BeautifulSoup(page)
   return soup

def article_link(URL):
   try:
      soup=url_parse(URL)
      for i in soup.find_all("a", class_="button entry-content__button entry-content__button--smaller"):
        link=i['href']
   except:
      pass    
return link



data['article_source']=""
for i, rows in data.iterrows():
   rows['article_source']= article_link(rows['url'])

Problem

The functions url_parse and article_link work fine, but when I use article_link to fill the cells of the dataframe, it stops working after roughly 1000 to 1500 URLs. I suspect my laptop's IP address may have been blocked, but I can't tell how to fix it because there is no error message.
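Because article_link swallows every exception, an IP block would not produce any error output. One way to check whether the site is rate-limiting the requests is to log the failure instead of passing silently. This is only a sketch, reusing the url_parse helper defined above:

from urllib.error import HTTPError, URLError

def article_link(URL):
    # Same lookup as before, but log failures so an IP block
    # (typically HTTP 403 or 429) becomes visible.
    link = None
    try:
        soup = url_parse(URL)
        for i in soup.find_all("a", class_="button entry-content__button entry-content__button--smaller"):
            link = i['href']
    except HTTPError as e:
        print(f"{URL}: HTTP {e.code}")
    except URLError as e:
        print(f"{URL}: {e.reason}")
    return link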

Expected

The function article_link should parse all the URLs inside the dataframe.
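The code below takes a different approach: instead of one urllib request per dataframe row, it uses requests with a ThreadPoolExecutor to fetch the 237 listing pages concurrently and collect every matching link from each page.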

import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor

url = "https://www.poynter.org/ifcn-covid-19-misinformation/page/{}/"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
}


def main(url, num):
    # Fetch one listing page and return every matching article link on it.
    with requests.Session() as req:
        print(f"Extracting Page# {num}")
        r = req.get(url.format(num), headers=headers)
        soup = BeautifulSoup(r.content, 'html.parser')
        links = [item.get("href") for item in soup.find_all(
            "a", class_="button entry-content__button entry-content__button--smaller")]
        return links


# Crawl the 237 listing pages concurrently.
with ThreadPoolExecutor(max_workers=50) as executor:
    futures = [executor.submit(main, url, num) for num in range(1, 238)]

for future in futures:
    print(future.result())
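If the end goal is still a dataframe of links, the per-page lists returned by the futures can be flattened afterwards. A minimal sketch, assuming pandas is available ('article_source' is just an illustrative column name):

import pandas as pd

# Flatten the per-page link lists into one dataframe column.
all_links = [link for future in futures for link in future.result()]
data = pd.DataFrame({'article_source': all_links})
print(len(data), "links collected")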
