
Python Newspaper3k newspaper library multithreading hangs indefinitely

I'm working on a project that extracts articles from gaming media sites. On a basic test run, according to VSCode's debugger, it hangs forever on the two sites I set up for multithreaded extraction (changing the thread count doesn't help). I'm honestly not sure what I'm doing wrong here; I followed the examples already out there. One of the sites, Gamespot, is even used in someone's tutorial, and removing the other site (Polygon) didn't seem to help either. I've created a virtual environment and tried both Python 3.8 and 3.7. All dependencies appear to be satisfied, and I also tested on repl.it and got the same hang.

I'd love to know what I'm doing wrong so I can fix it; I really want to do some data science on these particular sites and their articles! But it seems that, at least for OS X users, there is some kind of bug in the multithreading. Here is my code:

#import system functions
import sys
import requests
sys.path.append('/usr/local/lib/python3.8/site-packages/')
#import basic HTTP handling processes
#import urllib
#from urllib.request import urlopen
#import scraping libraries

#import newspaper and BS dependencies

from bs4 import BeautifulSoup
import newspaper
from newspaper import Article 
from newspaper import Source 
from newspaper import news_pool

#import broad data libraries
import pandas as pd

#import gaming related news sources as newspapers
gamespot = newspaper.build('https://www.gamespot.com/news', memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', memoize_articles=False)

#organize the gaming related news sources using a list
gamingPress = [gamespot, polygon]
print("About to set the pool.")
#parallel process these articles using multithreading (store in mem)
news_pool.set(gamingPress, threads_per_source=4)
print("Setting the pool")
news_pool.join()
print("Pool set")
#create the interim pandas dataframe based on these sources
final_df = pd.DataFrame()

#cap the number of articles parsed per source (no limit is placed on the sources themselves)
limit = 10

for source in gamingPress:
    #these are temporary placeholder lists for elements to be extracted
    list_title = []
    list_text = []
    list_source = []

    count = 0

    for article_extract in source.articles:
        #stop once enough articles have been collected from this source
        if count > limit:
            break

        article_extract.parse()

        list_title.append(article_extract.title)
        list_text.append(article_extract.text)
        list_source.append(article_extract.source_url)

        print(count)
        count += 1 #progress the loop *via* count

    temp_df = pd.DataFrame({'Title': list_title, 'Text': list_text, 'Source': list_source})
    #Append this to the final DataFrame
    final_df = final_df.append(temp_df, ignore_index=True)

#export to CSV, placeholder for deeper analysis/more limited scope, may remain
final_df.to_csv('gaming_press.csv')

Here is what I got back when I finally gave up and interrupted at the console:


About to set the pool.
Setting the pool
^X^X^CTraceback (most recent call last):
  File "scraper1.py", line 31, in <module>
    news_pool.join()
  File "/usr/local/lib/python3.8/site-packages/newspaper3k-0.3.0-py3.8.egg/newspaper/mthreading.py", line 103, in join
    self.pool.wait_completion()
  File "/usr/local/lib/python3.8/site-packages/newspaper3k-0.3.0-py3.8.egg/newspaper/mthreading.py", line 63, in wait_completion
    self.tasks.join()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/queue.py", line 89, in join
    self.all_tasks_done.wait()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 302, in wait
    waiter.acquire()
KeyboardInterrupt

I decided to research the newspaper multithreading issue. I reviewed the Newspaper source code on GitHub and devised this answer. In my testing I was able to obtain the article titles.

This processing looks time-consuming: it takes about 6 minutes on average. After more research, the time delay appears to be directly related to the articles being downloaded in the background. I'm not sure how to speed this process up within Newspaper itself, but one option for shortening it is sketched after the code below.

import newspaper
from newspaper import Config
from newspaper import news_pool

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

gamespot = newspaper.build('https://www.gamespot.com/news', config=config, memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', config=config, memoize_articles=False)

gamingPress = [gamespot, polygon]

# this setting is adjustable 
news_pool.config.number_threads = 2

# this setting is adjustable 
news_pool.config.thread_timeout_seconds = 2

news_pool.set(gamingPress)
news_pool.join()

for source in gamingPress:
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)
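
If the long runtime is dominated by the sheer number of background downloads, one option is simply to queue fewer articles. This is a minimal sketch of my own, not something from the Newspaper docs: it assumes Source.articles is a plain list that can be truncated before the sources are handed to news_pool.

#sketch (assumption): truncate each source's article list before pooling
#so fewer downloads happen in the background
ARTICLES_PER_SOURCE = 25  #hypothetical cap; tune to taste

for source in gamingPress:
    #after newspaper.build(), Source.articles is a list of Article objects
    source.articles = source.articles[:ARTICLES_PER_SOURCE]

news_pool.set(gamingPress)
news_pool.join()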

Honestly, I'm still trying to determine the benefit of using news_pool. Judging from the comments in the Newspaper source code, the main purpose of news_pool relates to connection rate limiting. I've also noticed several attempts to improve the threading model, but those code updates haven't been pushed to the production code yet.
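
Since rate limiting appears to be the point of news_pool, a plain sequential loop with an explicit delay achieves much the same effect without the pool. A minimal sketch, assuming roughly one request per second is acceptable to these sites (the delay value is my guess, not anything from Newspaper):

import time

#manual throttle (assumption): download and parse sequentially,
#sleeping between requests instead of relying on news_pool
for article in gamespot.articles[:10]:  #small sample for testing
    article.download()
    article.parse()
    print(article.title)
    time.sleep(1.0)  #assumed polite delay; adjust per site policy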

Nevertheless... the code below starts producing output within 1 minute, and it doesn't use news_pool. More testing is needed to see whether the sources rate-limit the connections or other problems crop up.

import newspaper
from newspaper import Config
from newspaper import news_pool

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

gamespot = newspaper.build('https://www.gamespot.com/news', config=config, memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', config=config, memoize_articles=False)
gamingPress = [gamespot, polygon]
for source in gamingPress:
    source.download_articles()
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)

Regarding the news_pool code section: for some reason, in my limited testing against your target sources, I noticed redundant article titles. One way to filter them is sketched below.
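
If those duplicates matter for your data set, a simple workaround is to filter on the title while iterating, after news_pool has finished downloading. A minimal sketch; the set-based filter is my addition, not part of Newspaper:

seen_titles = set()
for source in gamingPress:
    for article_extract in source.articles:
        article_extract.parse()
        #skip any article whose title has already been printed (assumed workaround)
        if article_extract.title in seen_titles:
            continue
        seen_titles.add(article_extract.title)
        print(article_extract.title)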
