Too many threads in python threading - Recursive traversal

I have a script to traverse an AWS S3 bucket to do some aggregation at the file level.

from threading import Semaphore, Thread
class Spider:
    def __init__(self):
        self.sem = Semaphore(120)
        self.threads = list()

    def crawl(self, root_url):
        self.recursive_harvest_subroutine(root_url)
        for thread in self.threads:
            thread.join()

    def recursive_harvest_subroutine(self, url):
        children = get_direct_subdirs(url)
        self.sem.acquire()
        if len(children) == 0:
            queue_url_to_do_something_later(url)  # Done
        else:
            for child_url in children:
                thread = Thread(target=self.recursive_harvest_subroutine, args=(child_url,))
                self.threads.append(thread)
                thread.start()
        self.sem.release()
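
get_direct_subdirs and queue_url_to_do_something_later are my own helpers and are omitted here. For context, get_direct_subdirs could look roughly like the sketch below, which lists the immediate "sub-directories" of a prefix via boto3's list_objects_v2 paginator (this is only an illustration, not my exact helper; the URL parsing in particular is simplified).

import boto3

s3 = boto3.client("s3")

def get_direct_subdirs(url):
    # Return the immediate "sub-directories" of an s3://bucket/prefix/ URL
    # by asking S3 for the common prefixes up to the '/' delimiter.
    bucket, _, prefix = url[len("s3://"):].partition("/")
    subdirs = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter="/"):
        subdirs.extend(f"s3://{bucket}/{p['Prefix']}" for p in page.get("CommonPrefixes", []))
    return subdirs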

This used to run okay, until I encountered a bucket of several TB of data with hundreds of thousands of sub-directories. The number of Thread objects in self.threads increases very fast, and soon the server reported:

RuntimeError: can't start new thread
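
The error itself is easy to reproduce outside the crawler: keep starting threads with no upper bound and the interpreter eventually refuses to create more. The snippet below is only an illustration of that failure mode, not part of my script.

import threading
import time

threads = []
try:
    while True:
        # each thread just sleeps, so the only resource being exhausted
        # is the OS/interpreter limit on the number of live threads
        t = threading.Thread(target=time.sleep, args=(60,))
        t.start()
        threads.append(t)
except RuntimeError as exc:
    print(f"failed after starting {len(threads)} threads: {exc}")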

There is some extra processing I have to do in the script, so I can't just get all files from the bucket.

Currently I'm requiring a depth of at least 2 before the script is allowed to go parallel, but it's just a workaround. Any suggestion is appreciated.

So the way the original piece of code worked was BFS, which created a lot of waiting threads in the queue. I changed it to DFS and everything is working fine. Pseudo code in case someone needs this in the future:

from threading import Lock, Semaphore, Thread

class Spider:
    def __init__(self):
        self.sem = Semaphore(120)
        self.urls = list()      # pending URLs, used as a stack (DFS)
        self.mutex = Lock()     # protects self.urls

    def crawl(self, root_url):
        self.recursive_harvest_subroutine(root_url)
        while not is_done():    # is_done(): my own completion check, explained below
            self.sem.acquire()
            url = self.urls.pop(0)
            thread = Thread(target=self.recursive_harvest_subroutine, args=(url,))
            thread.start()
            self.sem.release()

    def recursive_harvest_subroutine(self, url):
        children = get_direct_subdirs(url)
        if len(children) == 0:
            queue_url_to_do_something_later(url)  # Done
        else:
            self.mutex.acquire()
            for child_url in children:
                self.urls.insert(0, child_url)    # push to the front: DFS order
            self.mutex.release()

There is no join(), so I implemented my own is_done() check.
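
The check itself isn't shown above; one way to implement it (a sketch, not the exact check from my script) is to count how many URLs have been queued but not fully processed yet, and report done when that count returns to zero.

from threading import Lock

class Progress:
    """Tracks how many URLs are queued or still being harvested."""

    def __init__(self):
        self._lock = Lock()
        self._in_flight = 0

    def added(self, n=1):
        # call whenever URLs are pushed onto the pending list (or the root is submitted)
        with self._lock:
            self._in_flight += n

    def finished(self):
        # call at the end of each recursive_harvest_subroutine run
        with self._lock:
            self._in_flight -= 1

    def is_done(self):
        # the crawl is complete once every submitted URL has been processed
        with self._lock:
            return self._in_flight == 0

With a counter like this, crawl would also have to wait for self.urls to be non-empty before popping, since is_done() can be False while the pending list is momentarily empty.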
