
Using 'spawn' to start a redis process but facing TypeError: can't pickle _thread.lock objects

I have to use 'spawn' to start processes, because I need to pass CUDA tensors between them. But creating the redis process with 'spawn' always fails with TypeError: can't pickle _thread.lock objects.
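For reference, a minimal sketch (not from the original post, and assuming a CUDA-capable machine) of why 'spawn' is needed here: the CUDA runtime does not support the 'fork' start method, so CUDA tensors can only be shared with child processes started via 'spawn' or 'forkserver'.

    import torch
    import torch.multiprocessing as mp

    def consumer(q):
        # Receives a tensor that lives in shared CUDA memory.
        t = q.get()
        print(t.device, t.sum().item())

    if __name__ == '__main__':
        mp.set_start_method('spawn')   # 'fork' is unsupported by the CUDA runtime
        q = mp.Queue()
        p = mp.Process(target=consumer, args=(q,))
        p.start()
        q.put(torch.ones(3, device='cuda'))
        p.join()  # keep the tensor referenced until the consumer is done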

(For some reason, some parts of this code were removed when it was posted.)

It seems that only 'fork' works fine.

import asyncio

import redis
import torch.multiprocessing as mp
from torch.multiprocessing import Process

class Buffer(Process):

    def __init__(self, name=0, num_peers=2, actor_queue=0, communicate_queue=0):
        Process.__init__(self)
      
        #some arguments
        self.actor_queue = actor_queue
        self.communicate_queue = communicate_queue
       
        pool = redis.ConnectionPool(host='localhost', port=6379, decode_responses=True)
        self.r = redis.Redis(connection_pool=pool)
        self.r.flushall()

    async def write(self, r):
        pass  # do sth

    async def aggregate(self, r):
        pass  # do sth

    def run(self):
        name_process = mp.current_process().name + str(mp.current_process().pid)
        print('starting...', name_process)
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        tasks = asyncio.gather(
            loop.create_task(self.write(self.r)),
            loop.create_task(self.aggregate(self.r)),
        )
        try:
            loop.run_until_complete(tasks)
        finally:
            loop.close()

if __name__ == '__main__':
    mp.set_start_method('spawn')

    queue = mp.Queue(maxsize=5)
    c_queue = mp.Queue(maxsize=5)  # assumed: referenced below but missing from the post
    queue.put('sth')
    name = 'yjsp'
    num_peers = 2
    p = Buffer(name, num_peers, queue, c_queue)
    p.start()

Problem solved!

We should define the pool and other unpicklable objects in run().

Here is the reason: threads live inside a process, and a process spins up child processes to enable parallelism. Threads need locks to guard shared resources, for example to prevent several workers from acquiring the same resource and deadlocking. The redis connection pool holds such locks internally, and with 'spawn' the whole Process object, including everything assigned in __init__, must be pickled and sent to the child process; the locks cannot be pickled, hence the TypeError.
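A minimal sketch of the root cause: a plain threading lock, of the kind redis's ConnectionPool holds internally, cannot be pickled, and pickling is exactly what 'spawn' attempts when sending the Process object to the child.

    import pickle
    import threading

    lock = threading.Lock()
    try:
        pickle.dumps(lock)  # this is what 'spawn' effectively attempts
    except TypeError as e:
        print(e)  # "can't pickle _thread.lock objects" (wording varies by Python version)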

If we define the pool in run(), we are already in the child process by the time run() is called, so the pool never needs to be pickled.

Like this:

    def run(self):
        pool = redis.ConnectionPool(host='localhost', port=6379, decode_responses=True)
        r = redis.Redis(connection_pool=pool)
        r.flushall()
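For completeness, here is a runnable sketch of the full fix under the question's setup (names follow the question; the async bodies are stubs): only picklable attributes are set in __init__, and the redis pool is created inside run(), which already executes in the child process, so no lock ever crosses the spawn boundary.

    import asyncio

    import redis
    import torch.multiprocessing as mp
    from torch.multiprocessing import Process

    class Buffer(Process):

        def __init__(self, actor_queue, communicate_queue):
            Process.__init__(self)
            # Only picklable attributes here; multiprocessing queues are
            # designed to be inherited by the child process.
            self.actor_queue = actor_queue
            self.communicate_queue = communicate_queue

        async def write(self, r):
            pass  # do sth

        async def aggregate(self, r):
            pass  # do sth

        def run(self):
            # Created in the child process, so the pool's internal
            # locks are never pickled.
            pool = redis.ConnectionPool(host='localhost', port=6379,
                                        decode_responses=True)
            r = redis.Redis(connection_pool=pool)
            r.flushall()
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            try:
                loop.run_until_complete(asyncio.gather(
                    loop.create_task(self.write(r)),
                    loop.create_task(self.aggregate(r)),
                ))
            finally:
                loop.close()

    if __name__ == '__main__':
        mp.set_start_method('spawn')
        p = Buffer(mp.Queue(maxsize=5), mp.Queue(maxsize=5))
        p.start()
        p.join()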
