
Write gstreamer source with opencv

My goal is to write a GigEVision-to-GStreamer application. The first approach was to read the frames via a GigEVision API and then send them via GStreamer as a raw RTP/UDP stream, which any GStreamer application can then receive. Here is a minimal example for a webcam: https://github.com/tik0/mat2gstreamer
The drawback of this is a lot of serialization and deserialization when the packets are sent via UDP to the next application.

So the question: is it possible to write a GStreamer source pad easily with OpenCV, to overcome these drawbacks? (Or do you have any other suggestions?)

Greetings

I think I've found the best solution for my given setup (i.e. the data is exchanged between applications on the same PC). Just using the shared-memory plugin allows data exchange with minimal effort. So my OpenCV pipeline looks like:

appsrc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false
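On the OpenCV side, that pipeline string is handed to `cv2.VideoWriter` with the GStreamer backend. A minimal sketch of how this might look (the helper names, resolution, and frame rate below are my own assumptions, and it requires an OpenCV build compiled with GStreamer support):

```python
def shm_writer_pipeline(socket_path="/tmp/foo"):
    """Build the GStreamer pipeline string for the OpenCV (sender) side."""
    return ("appsrc ! shmsink socket-path=%s sync=true "
            "wait-for-connection=false" % socket_path)

def open_writer(width=640, height=480, fps=30.0):
    """Open a cv2.VideoWriter that pushes BGR frames into shared memory.

    cv2 is imported lazily so the pipeline helper above stays usable
    without OpenCV installed; fourcc=0 means raw (uncompressed) frames.
    """
    import cv2
    return cv2.VideoWriter(shm_writer_pipeline(), cv2.CAP_GSTREAMER,
                           0, fps, (width, height), True)
```

Each `cv::Mat` frame written with `writer.write(frame)` then lands in the shared-memory segment behind `/tmp/foo`.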

And any other receiver (in this case gstreamer-1.0) looks like:

gst-launch-1.0 shmsrc socket-path=/tmp/foo ! video/x-raw,format=BGR,width=<myWidth>,height=<myHeight>,framerate=<myFps> ! videoconvert ! autovideosink

Works very nicely, even with multiple clients attached.
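The receiver can equally be another OpenCV application instead of gst-launch. A sketch under the same assumptions (placeholder geometry and frame rate, lazy `cv2` import); note that the caps on the shmsrc side must exactly match what the sender pushes:

```python
def shm_reader_pipeline(socket_path="/tmp/foo", width=640, height=480, fps=30):
    """Build the GStreamer pipeline string for an OpenCV (receiver) side.

    The BGR caps must match the sender; width/height/fps here are
    illustrative placeholders, not values from the original post.
    """
    return ("shmsrc socket-path=%s ! "
            "video/x-raw,format=BGR,width=%d,height=%d,framerate=%d/1 ! "
            "videoconvert ! appsink" % (socket_path, width, height, fps))

def open_reader(**kwargs):
    """Open a cv2.VideoCapture reading frames out of shared memory."""
    import cv2  # lazy import: needs an OpenCV build with GStreamer support
    return cv2.VideoCapture(shm_reader_pipeline(**kwargs), cv2.CAP_GSTREAMER)
```

`reader.read()` then yields the same BGR frames the sender wrote, with no UDP serialization in between.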
