
Write to a GStreamer pipeline from OpenCV in Python

I'm trying to stream some images from OpenCV using GStreamer, and I've got some issues with the pipeline. I'm new to GStreamer and OpenCV in general. I compiled OpenCV 3.2 with GStreamer support for Python 3 on a Raspberry Pi 3. I have a little bash script that I use with raspivid:

raspivid -fps 25 -h 720 -w 1080 -vf -n -t 0 -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.1.27 port=5000
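For reference, a receiver for this gdppay/tcpserversink pipeline can also be opened directly from OpenCV. The following is only a minimal sketch, assuming the OpenCV build has GStreamer support; the host and port simply mirror the sender above:

import cv2

# Sketch of a client pipeline matching the sender above: tcpclientsrc pulls
# the GDP-framed stream, gdpdepay/rtph264depay undo the payloading,
# avdec_h264 decodes, and appsink hands the frames to OpenCV.
pipeline = ('tcpclientsrc host=192.168.1.27 port=5000 ! '
            'gdpdepay ! rtph264depay ! avdec_h264 ! '
            'videoconvert ! appsink')

cap = cv2.VideoCapture(pipeline)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('stream', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()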

I wanted to translate this pipeline in order to use it from OpenCV and feed it the images that my algorithm manipulates. I did some research and figured out that I can use VideoWriter with appsrc instead of fdsrc, but I get the following error:

GStreamer Plugin: Embedded video playback halted; module appsrc0 reported: Internal data flow error.

The Python script that I came up with is the following, by the way:

import cv2

cap = cv2.VideoCapture(0)


# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('appsrc  ! h264parse ! '
                      'rtph264pay config-interval=1 pt=96 ! '
                      'gdppay ! tcpserversink host=192.168.1.27 port=5000 ',
                      fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        frame = cv2.flip(frame, 0)

        # write the flipped frame
        out.write(frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()

Is there any error in the pipeline? I don't understand the error. I already have a Python client that can read from the bash pipeline, and the results are pretty good in terms of latency and resource consumption.
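As a quick sanity check that the cv2 build really was compiled with GStreamer support (which the appsrc pipeline string relies on), the build report can be inspected. A minimal sketch using only the standard cv2.getBuildInformation() call:

import cv2

# Print the lines of the build report that mention GStreamer; under
# "Video I/O" it should read "YES" if the backend was compiled in.
for line in cv2.getBuildInformation().splitlines():
    if 'GStreamer' in line:
        print(line.strip())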

I came across the solution and I hope this helps other people who run into the same issue. The pipeline was arranged incorrectly and a videoconvert element was needed. Latency was also quite noticeable, but setting speed-preset to ultrafast solved that; even though there's not much compression going on, it was a good compromise. Here's my solution:

import cv2

cap = cv2.VideoCapture(0)

framerate = 25.0

# fourcc 0 hands raw frames to appsrc; the GStreamer elements do the encoding
out = cv2.VideoWriter('appsrc ! videoconvert ! '
                      'x264enc noise-reduction=10000 speed-preset=ultrafast tune=zerolatency ! '
                      'rtph264pay config-interval=1 pt=96 ! '
                      'tcpserversink host=192.168.1.27 port=5000 sync=false',
                      0, framerate, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:

        out.write(frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()
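One extra detail worth noting: the GStreamer backend will refuse frames whose dimensions differ from the size the VideoWriter was opened with (640x480 above), so if the camera delivers a different resolution nothing reaches the pipeline. A small, hypothetical helper (not part of the original answer) that guards against this:

import cv2

def write_frame(writer, frame, size=(640, 480)):
    # Hypothetical helper: resize to the dimensions the VideoWriter was
    # opened with before writing, since mismatched sizes are rejected.
    if (frame.shape[1], frame.shape[0]) != size:
        frame = cv2.resize(frame, size)
    writer.write(frame)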
