Multiprocessing in Python crashes when code reaches start()

I'm new to Python. I'm trying to use multiprocessing to speed things up. First I tried an example, and everything worked fine. Here is the code:

from multiprocessing import Process
import time

def f(name, n, m):
    if name == 'bob':
        time.sleep(2)
    print 'hello', name, ' ', n, m

def h():
    g(1, 2, 3)

def g(a, s, d):
    p = Process(target=f, args=('bob', a, s,))
    t = Process(target=f, args=('helen', s, d,))
    p.start()
    t.start()
    t.join()
    p.join()
    print("END")

if __name__ == '__main__':
    print("Start")
    h()

After that, I used the same technique on my own code and got an error. Here is the relevant part of the code:

if __name__ == "__main__":
    night_crawler_steam()

def night_crawler_steam():
    .
    .
    .
    multi_processing(max_pages, url, dirname)
    .
    .
    .

def multi_processing(max_pages, url, dirname):
    page = 1
    while page <= max_pages:
        my_url = str(url) + str(page)
        soup = my_soup(my_url)
        fgt = Process(target=find_game_titles, args=(soup, page, dirname,))
        fl = Process(target=find_links, args=(soup, page, dirname,))
        fgt.start() #<-----------Here is the problem
        fl.start()
        fgt.join()
        fl.join()
        page += 1

def find_links(soup, page, dirname):
    .
    .
    .

def find_game_titles(soup, page, dirname):
    .
    .
    .

When the interpreter reaches fgt.start(), these errors appear:

Traceback (most recent call last):
  File "C:/Users/��������/Desktop/MY PyWORK/NightCrawler/NightCrawler.py", line 120, in <module>
    night_crawler_steam()
  File "C:/Users/��������/Desktop/MY PyWORK/NightCrawler/NightCrawler.py", line 23, in night_crawler_steam
Traceback (most recent call last):
  File "<string>", line 1, in <module>
    multi_processing(max_pages, url, dirname)
  File "C:/Users/��������/Desktop/MY PyWORK/NightCrawler/NightCrawler.py", line 47, in multi_processing
    fgt.start()
  File "C:\Python27\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Python27\lib\multiprocessing\forking.py", line 277, in __init__
  File "C:\Python27\lib\multiprocessing\forking.py", line 381, in main
    dump(process_obj, to_child, HIGHEST_PROTOCOL)
  File "C:\Python27\lib\multiprocessing\forking.py", line 199, in dump
    self = load(from_parent)
  File "C:\Python27\lib\pickle.py", line 1384, in load
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Python27\lib\pickle.py", line 224, in dump
    self.save(obj)
  File "C:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\lib\pickle.py", line 425, in save_reduce
    return Unpickler(file).load()
  File "C:\Python27\lib\pickle.py", line 864, in load
    save(state)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 655, in save_dict
    dispatch[key](self)
  File "C:\Python27\lib\pickle.py", line 886, in load_eof
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\lib\pickle.py", line 687, in _batch_setitems
    raise EOFError
    save(v)
EOFError

This goes on until RuntimeError: maximum recursion depth exceeded.

Any ideas would be helpful!

The problem seems to be pickling soup (see the Programming guidelines in the multiprocessing docs): on Windows, a Process's arguments are pickled and sent to the child process, and a parsed soup is a deeply nested object tree that can exceed the recursion limit while being pickled. A simple solution is therefore to move the my_soup(my_url) call into the target functions, like this:

from multiprocessing import Pool

def multi_processing(max_pages, url, dirname):
    p = Pool()  # using a pool is not necessary to fix your problem
    for page in xrange(1, max_pages + 1):
        my_url = str(url) + str(page)
        p.apply_async(find_game_titles, (my_url, page, dirname))
        p.apply_async(find_links, (my_url, page, dirname))
    p.close()
    p.join()

def find_links(url, page, dirname):
    soup = my_soup(url)
    # function body from before


def find_game_titles(url, page, dirname):
    soup = my_soup(url)
    # function body from before

(Of course you could also pass the soup in some picklable form instead, but depending on what my_soup does, it may or may not be worth it.)
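For illustration, here is a minimal sketch of that alternative, assuming my_soup is built on BeautifulSoup (the bs4 import and the re-parse step are assumptions, not part of the original code): serialize the tree to plain HTML text in the parent and rebuild it in the child.

from multiprocessing import Process
from bs4 import BeautifulSoup  # assumption: my_soup is BeautifulSoup-based

def multi_processing(max_pages, url, dirname):
    page = 1
    while page <= max_pages:
        my_url = str(url) + str(page)
        html = str(my_soup(my_url))  # plain strings pickle without trouble
        fgt = Process(target=find_game_titles, args=(html, page, dirname))
        fl = Process(target=find_links, args=(html, page, dirname))
        fgt.start()
        fl.start()
        fgt.join()
        fl.join()
        page += 1

def find_game_titles(html, page, dirname):
    soup = BeautifulSoup(html)  # rebuild the tree in the child
    # function body from before

Note that each child then pays for a second parse, which is part of why it may or may not be worth it.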

While not strictly necessary, it is common to put the if __name__ == "__main__": part at the end of the file.
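In other words, a structural sketch (function bodies elided):

def multi_processing(max_pages, url, dirname):
    # ... as above ...
    pass

def night_crawler_steam():
    # ... as before ...
    pass

# entry point last, after every definition it relies on
if __name__ == "__main__":
    night_crawler_steam()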

You may also want to look at the other methods of multiprocessing.Pool, since depending on what your functions do, one of them might be a better fit.
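For example, a sketch using Pool.map; the process_page helper is a hypothetical name introduced here just to bundle the two per-page tasks into one picklable, module-level function:

from multiprocessing import Pool

def process_page(args):
    # hypothetical helper: run both per-page tasks in one worker
    page, url, dirname = args
    my_url = str(url) + str(page)
    find_game_titles(my_url, page, dirname)
    find_links(my_url, page, dirname)

def multi_processing(max_pages, url, dirname):
    p = Pool()
    # map blocks until every page is done and re-raises a worker's
    # exception in the parent, unlike apply_async where errors only
    # surface when you call get() on the result
    p.map(process_page, [(page, url, dirname) for page in xrange(1, max_pages + 1)])
    p.close()
    p.join()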
