
Multi-threading in image processing - video python opencv

I'm working on object detection from a live stream video using OpenCV Python. The program I have runs on a single thread, and because of that the resulting video shown on the screen doesn't even look like a video, since there is a delay in the detection process. So I'm trying to re-implement it using multiple threads: one thread for reading frames, another for showing the detection result, and about 5 threads to run the detection algorithm on multiple frames at once. I have written the following code, but the result is no different from the single-threaded program. I'm new to Python, so any help is appreciated.

import threading, time
import cv2
import queue


def detect_object():
    while True:
        print("get")
        frame = input_buffer.get()
        if frame is not None:
            time.sleep(1)
            detection_buffer.put(frame)
        else:
            break
    return


def show():
    while True:
        print("show")
        frame = detection_buffer.get()
        if frame is not None:
            cv2.imshow("Video", frame)
        else:
            break
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    return


if __name__ == "__main__":

    input_buffer = queue.Queue()
    detection_buffer = queue.Queue()

    cap = cv2.VideoCapture(0)

    for i in range(5):
        t = threading.Thread(target=detect_object)
        t.start()

    t1 = threading.Thread(target=show)
    t1.start()

    while True:
        ret, frame = cap.read()
        if ret:
            input_buffer.put(frame)
            time.sleep(0.025)
        else:
            break

    print("program ended")

Working on the assumption that the detection algorithm is CPU-intensive, you need to use multiprocessing instead of multithreading, since multiple threads will not run Python bytecode in parallel due to contention for the Global Interpreter Lock. You should also get rid of all the calls to sleep. It is also not clear, when you run multiple threads or processes the way you are doing, what guarantees that the frames will be output in the correct order: the processing of the second frame could complete before the processing of the first frame and get written to the detection_buffer first.

The following uses a processing pool of 6 processes (there is now no need for an explicit input queue).

from multiprocessing import Pool, Queue
import time
import cv2

# initialize global variables for the pool processes:
def init_pool(d_b):
    global detection_buffer
    detection_buffer = d_b


def detect_object(frame):
    time.sleep(1)
    detection_buffer.put(frame)


def show():
    while True:
        print("show")
        frame = detection_buffer.get()
        if frame is not None:
            cv2.imshow("Video", frame)
        else:
            break
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    return


# required for Windows:
if __name__ == "__main__":

    detection_buffer = Queue()
    # 6 workers: 1 for the show task and 5 to process frames:
    pool = Pool(6, initializer=init_pool, initargs=(detection_buffer,))
    # run the "show" task:
    show_future = pool.apply_async(show)

    cap = cv2.VideoCapture(0)

    futures = []
    while True:
        ret, frame = cap.read()
        if ret:
            f = pool.apply_async(detect_object, args=(frame,))
            futures.append(f)
            time.sleep(0.025)
        else:
            break
    # wait for all the frame-putting tasks to complete:
    for f in futures:
        f.get()
    # signal the "show" task to end by placing None in the queue
    detection_buffer.put(None)
    show_future.get()
    print("program ended")

What I've done is build 2 threads for 2 functions and use one queue:

  1. to get the frame and process it
  2. to display

The cap variable was inside my process function.

import threading
import queue
import cv2

q = queue.Queue()


def process():
    cap = cv2.VideoCapture(filename)   # filename is the video source
    ret, frame = cap.read()

    while ret:
        ret, frame = cap.read()
        # detection part: in my case I use TensorFlow, then
        # end of detection part
        q.put(result_of_detection)


def Display():
    while True:
        if not q.empty():
            frame = q.get()
            cv2.imshow("frame1", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break


if __name__ == '__main__':
    # start threads
    p1 = threading.Thread(target=process)
    p2 = threading.Thread(target=Display)
    p1.start()
    p2.start()

It works just fine for me.

Hope I helped :D

Also, I think this page may help: https://pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
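
For reference, the core idea of that article is to move the blocking cap.read() call onto its own thread so the main loop always has the latest frame available. A rough sketch of that pattern, with class and method names of my own choosing rather than the article's exact code:

import threading
import cv2


class ThreadedCapture:
    # keep grabbing frames on a background thread so read() never blocks on camera I/O
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.stopped = False
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        while not self.stopped:
            ret, frame = self.cap.read()
            with self.lock:
                self.ret, self.frame = ret, frame

    def read(self):
        # return the most recent frame without waiting for the camera
        with self.lock:
            return self.ret, self.frame

    def stop(self):
        self.stopped = True
        self.thread.join()
        self.cap.release()


if __name__ == "__main__":
    stream = ThreadedCapture(0)
    while True:
        ret, frame = stream.read()
        if not ret:
            break
        # detection would go here; imshow stands in for the processed result
        cv2.imshow("Video", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    stream.stop()
    cv2.destroyAllWindows()

Because the background thread keeps overwriting self.frame with the newest capture, read() never blocks on camera I/O; frames that arrive while detection is busy are simply dropped rather than queued.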
