
How to fix 'TypeError: can't pickle _thread.lock objects' when passing a Queue to a thread in a child process

I've been stuck on this issue all day, and I have not been able to find any solutions relating to what I am trying to accomplish.

I am trying to pass Queues to threads spawned in sub-processes. The Queues were created in the entrance file and passed to each sub-process as a parameter.

I am making a modular program to a) run a neural network b) automatically update the network models when needed c) log events/images from the neural network to the servers. My former program used only one CPU core running multiple threads and was getting quite slow, so I decided to move certain parts of the program into sub-processes so they can run in their own memory spaces to their fullest potential.

Sub-process:

  1. Client-Server communication
  2. Webcam control and image processing
  3. Inferencing for the neural networks (there are 2 neural networks with their own process each)

4 total sub-processes.

As I develop, I need to communicate across each process so they are all on the same page with events from the servers and whatnot. So Queue would be the best option as far as I can tell.

(Clarify: 'Queue' from the 'multiprocessing' module, NOT the 'queue' module)

~~ However ~~

Each of these sub-processes spawn their own thread(s). For example, the 1st sub-process will spawn multiple threads: One thread per Queue to listen to the events from the different servers and hand them to different areas of the program; one thread to listen to the Queue receiving images from one of the neural networks; one thread to listen to the Queue receiving live images from the webcam; and one thread to listen to the Queue receiving the output from the other neural network.

I can pass the Queues to the sub-processes without issue and can use them effectively. However, when I try to pass them to the threads within each sub-process, I get the above error.

I am fairly new to multiprocessing; however, the methodology behind it looks to be much the same as for threads, apart from the separate memory spaces and the GIL.

This is from Main.py; the program entrance.

from lib.client import Client, Image

from multiprocessing import Queue, Process

class Main():

    def __init__(self, server):

        self.KILLQ = Queue()
        self.CAMERAQ = Queue()

        self.CLIENT = Client((server, 2005), self.KILLQ, self.CAMERAQ)
        self.CLIENT_PROCESS = Process(target=self.CLIENT.do, daemon=True)

        self.CLIENT_PROCESS.start()

if __name__ == '__main__':
    m = Main('127.0.0.1')
    while True:
        m.KILLQ.put("Hello world")

And this is from client.py (in a folder called lib)

import socket
from threading import Thread

class Client():

    def __init__(self, connection, killq, cameraq):

        self.TCP_IP = connection[0]
        self.TCP_PORT = connection[1]

        self.CAMERAQ = cameraq
        self.KILLQ = killq

        self.BUFFERSIZE = 1024
        self.HOSTNAME = socket.gethostname()

        self.ATTEMPTS = 0

        self.SHUTDOWN = False

        self.START_CONNECTION = MakeConnection((self.TCP_IP, self.TCP_PORT))

        # self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,), daemon=True)

        # self.KILLQ_THREAD.start()

    def do(self):
        # The function run as the sub-process target from Main.py
        print(self.KILLQ.get())

    def _listen(self, q):
        # Run in multiple threads, one per Queue ('q' is passed in when the thread is created)
        while True:
            print(q.get())

# self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,), daemon=True)

This is where the error is thrown. If I leave this line commented out, the program runs fine. I can read from the queue in this sub-process without issue (i.e. in the function 'do'), but not in a thread under this sub-process (i.e. in the function '_listen').

I need to be able to communicate across each process so they can be in step with the main program (ie in the case of a neural network model update, the inference sub-process needs to shut down so the model can be updated without causing errors).

Any help with this would be great!

I am also very open to other methods of communication that would work as well. If you believe a better communication method would fit, note that it would need to be fast enough to support real-time streaming of 4K images sent to the server from the camera.

Thank you very much for your time! :)

The queue is not the problem. The ones from the multiprocessing package are designed to be picklable, so that they can be shared between processes.

The issue is that your thread KILLQ_THREAD is created in the main process. Threads are not to be shared between processes. In fact, when a process is forked following POSIX standards, threads that are active in the parent process are not part of the process image that is cloned into the new child's memory space. One reason is that the state of mutexes at the time of the call to fork() might lead to deadlocks in the child process.
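The root cause can be seen in isolation: objects that hold thread locks (such as threading.Thread or threading.Lock) cannot be pickled, and multiprocessing pickles whatever the child process needs, including attributes of a bound-method target. A minimal sketch:

```python
import pickle
import threading

# Trying to pickle a lock (which every Thread object contains internally)
# fails with the same class of TypeError seen in the question.
try:
    pickle.dumps(threading.Lock())
except TypeError as e:
    print("pickling failed:", e)
```

This is why storing the Thread on `self` in `__init__` breaks `Process(target=self.CLIENT.do)`: pickling the Client instance drags the un-picklable Thread along with it.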

You'll have to move the creation of your thread into your child process, i.e.

def do(self):
    self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,), daemon=True)
    self.KILLQ_THREAD.start()

Presumably, KILLQ is supposed to signal the child processes to shut down. In that case, especially if you plan to use more than one child process, a queue is not the best method to achieve that. Since Queue.get() and Queue.get_nowait() remove the item from the queue, each item can only be retrieved and processed by one consumer. Your producer would have to put multiple shutdown signals into the queue. In a multi-consumer scenario, you also have no reasonable way to ensure that a specific consumer receives any specific item. Any item put into the queue can potentially be retrieved by any of the consumers reading from it.
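The consume-once behaviour is easy to demonstrate. A minimal sketch (the `"shutdown"` payload is just an illustrative placeholder):

```python
from multiprocessing import Queue
import queue  # only needed for the Empty exception

q = Queue()
q.put("shutdown")

# The first consumer removes the item from the queue...
print(q.get())

# ...so a second consumer finds nothing left.
try:
    q.get_nowait()
except queue.Empty:
    print("queue is already empty")
```

With N consumers you would have to put N shutdown signals and still could not control which consumer gets which one.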

For signalling, especially with multiple recipients, an Event is the better choice.

You'll also notice that your program appears to hang quickly after starting it. That's because you start both your child process and the thread with daemon=True.

When your Client.do() method looks like the above, i.e. it creates and starts the thread and then exits, your child process ends right after the call to self.KILLQ_THREAD.start(), and the daemonic thread ends immediately with it. Your main process does not notice anything and continues to put "Hello world" into the queue until it eventually fills up and queue.Full is raised.

Here's a condensed code example using an Event for shutdown signalling in two child processes with one thread each.

main.py

import time    
from lib.client import Client
from multiprocessing import Process, Event

class Main:

    def __init__(self):
        self.KILLQ = Event()
        self._clients = (Client(self.KILLQ), Client(self.KILLQ))
        self._procs = [Process(target=cl.do, daemon=True) for cl in self._clients]
        for proc in self._procs:
            proc.start()

if __name__ == '__main__':
    m = Main()
    # do sth. else
    time.sleep(1)
    # signal for shutdown
    m.KILLQ.set()
    # grace period for both shutdown prints to show
    time.sleep(.1)

client.py

import multiprocessing
from threading import Thread

class Client:

    def __init__(self, killq):
        self.KILLQ = killq

    def do(self):
        # non-daemonic thread! We want the process to stick around until the thread 
        # terminates on the signal set by the main process
        self.KILLQ_THREAD = Thread(target=self._listen, args=(self.KILLQ,))
        self.KILLQ_THREAD.start()

    @staticmethod
    def _listen(q):
        while not q.is_set():
            print("in thread {}".format(multiprocessing.current_process().name))
        print("{} - master signalled shutdown".format(multiprocessing.current_process().name))

Output

[...]
in thread Process-2
in thread Process-1
in thread Process-2
Process-2 - master signalled shutdown
in thread Process-1
Process-1 - master signalled shutdown

Process finished with exit code 0

As for methods of inter-process communication, you might want to look into a streaming server solution. Miguel Grinberg has written an excellent tutorial on Video Streaming with Flask back in 2014, with a more recent follow-up from August 2017.
