
Hardware-accelerated H.264 decoding using FFmpeg, OpenCV

I am working on a video analytics application where I have to decode an RTSP stream into IplImage frames, which are then fed into my analytics pipeline. The OpenCV VideoCapture structure lets me extract frames from an RTSP stream (I think it uses FFmpeg to do so), but the performance is not great. It needs to work in real time.
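For reference, this is roughly what I am doing now (a minimal sketch; the RTSP URL is a placeholder):

#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // Open the RTSP stream; by default OpenCV dispatches this to its FFmpeg backend.
    cv::VideoCapture cap("rtsp://camera.example/stream");
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // frame is converted to IplImage here and handed to the analytics pipeline.
    }
    return 0;
}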

I also went ahead and wrote my own FFmpeg decoder. But just like OpenCV, performance with RTSP streams is poor: lots of frames are dropped. Decoding from a local file, however, works fine. I am still refining the code.

What I need help with is this: first, can I use hardware-accelerated decoding here to improve performance? My app is supposed to be cross-platform, so I might need to use DirectX VA (Windows) and VAAPI (Linux). If so, is there any place where I can learn how to implement hardware acceleration in code, especially for FFmpeg decoding of H.264?

As far as I know, VideoCapture with the FFmpeg backend does not support hardware-accelerated decoding.

I think you can use VideoCapture with GStreamer as the backend instead, which lets you build a custom pipeline and enable hardware acceleration via VAAPI.

I'm using this pipeline:

rtspsrc location=%s latency=0 ! queue ! rtph264depay ! h264parse ! vaapidecodebin ! videorate ! videoscale ! videoconvert ! video/x-raw,width=640,height=480,framerate=5/1,format=BGR ! appsink
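Opened from OpenCV it looks roughly like this (a sketch; it assumes OpenCV was built with GStreamer support, and the RTSP URL is a placeholder):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <string>

int main() {
    // GStreamer pipeline: depay/parse H.264, decode with VAAPI, convert to BGR for OpenCV.
    std::string pipeline =
        "rtspsrc location=rtsp://camera.example/stream latency=0 ! queue ! "
        "rtph264depay ! h264parse ! vaapidecodebin ! videorate ! videoscale ! "
        "videoconvert ! video/x-raw,width=640,height=480,framerate=5/1,format=BGR ! "
        "appsink";

    // Force the GStreamer backend so the string is treated as a pipeline, not a URL.
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::fprintf(stderr, "failed to open pipeline\n");
        return 1;
    }

    cv::Mat frame;
    while (cap.read(frame)) {
        // frame is a 640x480 BGR image, ready for the analytics pipeline.
    }
    return 0;
}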
