
How to fix 'TypeError: can't pickle _thread.lock objects' when passing a Queue to a thread in a child process

I've been stuck on this issue all day, and I have not been able to find any solutions relating to what I am trying to accomplish.

I am trying to pass Queues to threads spawned in sub-processes. The Queues were created in the entry-point file and passed to each sub-process as a parameter.

I am making a modular program to a) run a neural network, b) automatically update the network models when needed, and c) log events/images from the neural network to the servers. My former program utilized only one CPU core running multiple threads and was getting quite slow, so I decided I needed to sub-process certain parts of the program so they can run in their own memory spaces to their fullest potential.

Sub-processes:

  1. Client-server communication
  2. Webcam control and image processing
  3. Inference for the neural networks (there are 2 neural networks, each with its own process)

4 total sub-processes.

As I develop, I need to communicate across each process so they are all on the same page with events from the servers and whatnot. So a Queue would be the best option as far as I can tell.

(To clarify: 'Queue' from the 'multiprocessing' module, NOT the 'queue' module.)
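
As a quick illustration of that distinction (a hypothetical sketch, not from my actual program): a multiprocessing.Queue can be handed to a Process as an argument, while a queue.Queue only works between threads of a single process.

from multiprocessing import Process, Queue   # inter-process queue
from queue import Queue as ThreadQueue      # thread-only queue

def worker(q):
    print(q.get())

if __name__ == '__main__':
    mp_q = Queue()
    p = Process(target=worker, args=(mp_q,))  # OK: built to cross process boundaries
    p.start()
    mp_q.put("hello from the parent")
    p.join()

    # A queue.Queue, by contrast, cannot be shared with another process;
    # it is only meaningful between threads within one process.
    tq = ThreadQueue()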

~~ However ~~

Each of these sub-processes spawns its own thread(s). For example, the 1st sub-process spawns multiple threads: one thread per Queue to listen to the events from the different servers and hand them to different areas of the program; one thread to listen to the Queue receiving images from one of the neural networks; one thread to listen to the Queue receiving live images from the webcam; and one thread to listen to the Queue receiving the output from the other neural network.

I can pass the Queues to the sub-processes without issue and can use them effectively. However, when I try to pass them to the threads within each sub-process, I get the above error.

I am fairly new to multiprocessing; however, the methodology behind it looks to be much the same as threading, except for the shared memory space and the GIL.
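
For intuition, the two APIs are deliberately parallel (a minimal illustrative sketch, not part of my program):

from threading import Thread
from multiprocessing import Process

if __name__ == '__main__':
    # Same call shape; a Thread shares memory and the GIL with its parent,
    # while a Process gets its own memory space and its own interpreter/GIL.
    Thread(target=print, args=("in a thread",)).start()
    Process(target=print, args=("in a process",)).start()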

This is from Main.py, the program entry point.

from lib.client import Client, Image
from multiprocessing import Queue, Process

class Main():

    def __init__(self, server):
        self.KILLQ = Queue()
        self.CAMERAQ = Queue()

        self.CLIENT = Client((server, 2005), self.KILLQ, self.CAMERAQ)
        self.CLIENT_PROCESS = Process(target=self.CLIENT.do, daemon=True)
        self.CLIENT_PROCESS.start()

if __name__ == '__main__':
    m = Main('127.0.0.1')
    while True:
        m.KILLQ.put("Hello world")

And this is from client.py (in a folder called lib):

import socket
from threading import Thread

class Client():

    def __init__(self, connection, killq, cameraq):
        self.TCP_IP = connection[0]
        self.TCP_PORT = connection[1]

        self.CAMERAQ = cameraq
        self.KILLQ = killq

        self.BUFFERSIZE = 1024
        self.HOSTNAME = socket.gethostname()

        self.ATTEMPTS = 0
        self.SHUTDOWN = False

        # MakeConnection is a helper defined elsewhere in this project
        self.START_CONNECTION = MakeConnection((self.TCP_IP, self.TCP_PORT))

        # self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,), daemon=True)
        # self.KILLQ_THREAD.start()

    def do(self):
        # The function run as the subprocess from Main.py
        print(self.KILLQ.get())

    def _listen(self, q):
        # This is threaded multiple times, listening to each Queue
        # (passed as 'q' when the thread is created)
        while True:
            print(q.get())  # fixed: was self.q.get(), but q is the parameter

# self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,), daemon=True)

This is where the error is thrown. If I leave this line commented out, the program runs fine. I can read from the queue in this sub-process without issue (i.e. in the function 'do'), but not in a thread under this sub-process (i.e. in the function '_listen').
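
For context, here is a stripped-down, hypothetical reproduction (names are illustrative; it assumes the 'spawn' start method, where the Process target gets pickled). The Thread stored on the instance in __init__ holds internal _thread.lock objects, so pickling the instance for the child process fails with exactly this error:

from multiprocessing import Process, Queue, set_start_method
from threading import Thread

class Holder:
    def __init__(self, q):
        self.q = q
        # Storing a Thread on the instance is what poisons pickling:
        self.t = Thread(target=self.q.get, daemon=True)

    def do(self):
        print(self.q.get())

if __name__ == '__main__':
    set_start_method('spawn')
    h = Holder(Queue())
    # Pickling h.do (and therefore h, including h.t) for the child fails:
    # TypeError: can't pickle _thread.lock objects
    Process(target=h.do, daemon=True).start()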

I need to be able to communicate across each process so they can be in step with the main program (e.g. in the case of a neural network model update, the inference sub-process needs to shut down so the model can be updated without causing errors).

Any help with this would be great!

I am also very open to other methods of communication that would work as well. If you believe a better communication method would work, it would need to be fast enough to support real-time streaming of 4K images sent from the camera to the server.

Thank you very much for your time! :)

Answer

The queue is not the problem. The ones from the multiprocessing package are designed to be picklable, so that they can be shared between processes.

The issue is that your thread KILLQ_THREAD is created in the main process. Threads are not to be shared between processes. In fact, when a process is forked following POSIX standards, threads that are active in the parent process are not part of the process image that is cloned into the new child's memory space. One reason is that the state of mutexes at the time of the call to fork() might lead to deadlocks in the child process.

You'll have to move the creation of your thread into your child process, i.e.:

def do(self):
    self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,), daemon=True)
    self.KILLQ_THREAD.start()

Presumably, KILLQ is supposed to signal the child processes to shut down. In that case, especially if you plan to use more than one child process, a queue is not the best method to achieve that. Since Queue.get() and Queue.get_nowait() remove the item from the queue, each item can only be retrieved and processed by one consumer. Your producer would have to put multiple shutdown signals into the queue. In a multi-consumer scenario, you also have no reasonable way to ensure that a specific consumer receives any specific item. Any item put into the queue can potentially be retrieved by any of the consumers reading from it.
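
To make that concrete, here's a small illustrative sketch of the workaround a queue would force on you:

from multiprocessing import Queue

NUM_CONSUMERS = 4          # illustrative number
q = Queue()
for _ in range(NUM_CONSUMERS):
    q.put(None)            # one shutdown sentinel per consumer
# Each consumer loop would then have to check: if q.get() is None: break
# ...and you still can't target a specific consumer with a specific item.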

For signalling, especially with multiple recipients, it is better to use an Event.

You'll also notice that your program appears to hang quickly after starting it. That's because you start both the child process and the thread with daemon=True.

When your Client.do() method looks like the above, i.e. creates and starts the thread, then exits, your child process ends right after the call to self.KILLQ_THREAD.start() and the daemonic thread immediately ends with it. Your main process does not notice anything and continues to put Hello world into the queue until it eventually fills up and queue.Full is raised.

Here's a condensed code example using an Event for shutdown signalling in two child processes with one thread each.

main.py

import time    
from lib.client import Client
from multiprocessing import Process, Event

class Main:

    def __init__(self):
        self.KILLQ = Event()
        self._clients = (Client(self.KILLQ), Client(self.KILLQ))
        self._procs = [Process(target=cl.do, daemon=True) for cl in self._clients]
        [proc.start() for proc in self._procs]

if __name__ == '__main__':
    m = Main()
    # do sth. else
    time.sleep(1)
    # signal for shutdown
    m.KILLQ.set()
    # grace period for both shutdown prints to show
    time.sleep(.1)

client.py

import multiprocessing
from threading import Thread

class Client:

    def __init__(self, killq):
        self.KILLQ = killq

    def do(self):
        # non-daemonic thread! We want the process to stick around until the thread 
        # terminates on the signal set by the main process
        self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,))
        self.KILLQ_THREAD.start()

    @staticmethod
    def _listen(q):
        while not q.is_set():
            print("in thread {}".format(multiprocessing.current_process().name))
        print("{} - master signalled shutdown".format(multiprocessing.current_process().name))

Output

[...]
in thread Process-2
in thread Process-1
in thread Process-2
Process-2 - master signalled shutdown
in thread Process-1
Process-1 - master signalled shutdown

Process finished with exit code 0

As for methods of inter-process communication, you might want to look into a streaming server solution. Miguel Grinberg has written an excellent tutorial on Video Streaming with Flask back in 2014, with a more recent follow-up from August 2017.
