Kinect v2 - Synchronize depth and color frames

I am currently looking for a stereoscopic camera for a project and the Kinect v2 seems to be a good option. However, since it's quite an investment for me, I need to be sure it meets my requirements, the main one being good synchronization of the different sensors.

Apparently there is no hardware synchronization of the sensors, and I have found conflicting accounts of how the software side handles it:

  1. Some posts where people complain about lag between the two sensors, and many others asking for a way to synchronize them. Both kinds of posts rely on strange workarounds, and no "official", common solution emerges from the answers.

  2. Some posts about a MultiSourceFrame class, which is part of the Kinect SDK 2.0. From what I understand, this class enables you to retrieve the frames of all the sensors (or fewer; you can choose which sensors you want the data from) at a given time. Thus, for a given instant t, you should be able to get the output of the different sensors and make sure these outputs are synchronized.

So my question is: does this MultiSourceFrame class do exactly what I think it does? And if yes, why is it never proposed as a solution? It seems the posts of the first category are from 2013, so before the release of SDK 2.0. However, the MultiSourceFrame class is supposed to replace the AllFramesReady event of the previous versions of the SDK, and AllFramesReady wasn't suggested as a solution either.

Unfortunately the documentation doesn't provide much information about how it works, so I'm asking here in case someone has already used it. I'm sorry if my question seems stupid, but I would like to be sure before purchasing such a camera.

Thank you for your answers! And feel free to ask for more details if needed :)

There was a discussion about that in a libfreenect2 issue, where someone specifically mentioned a 6.25 millisecond lag between the RGB and depth frames when using the MultiSourceFrameReader:

The RelativeTime of the ColorFrame seems to always lags 6.25 or 6.375 ms behind the RelativeTime of the DepthFrame, InfraredFrame, BodyFrame, BodyIndexFrame. Meanwhile, the RelativeTime always matches among DepthFrame, InfraredFrame, BodyFrame, and BodyIndexFrame.

In my own experiments, I got the same results. But that's only based on the frames' timestamps. These timestamps come from the Kinect v2 device directly, so it's unlikely, but still possible, that they are not 100% correct.

So, while there is a lag between depth and RGB frames even when using the MultiSourceFrameReader, it's most likely small enough that you can ignore it.

As for the usage of MultiSourceFrame / MultiSourceFrameReader, it's pretty simple once you get used to the Kinect v2 SDK:

m_pKinectSensor->OpenMultiSourceFrameReader(
    FrameSourceTypes::FrameSourceTypes_Depth | FrameSourceTypes::FrameSourceTypes_Color,
    &m_pMultiSourceFrameReader);

// get "synced" frame
IMultiSourceFrame* pMultiSourceFrame = NULL;
m_pMultiSourceFrameReader->AcquireLatestFrame(&pMultiSourceFrame);

// get depth frame
IDepthFrameReference* pDepthFrameReference = NULL;
pMultiSourceFrame->get_DepthFrameReference(&pDepthFrameReference);
IDepthFrame* pDepthFrame = NULL;
pDepthFrameReference->AcquireFrame(&pDepthFrame);

// get RGB frame
IColorFrameReference* pColorFrameReference = NULL;
pMultiSourceFrame->get_ColorFrameReference(&pColorFrameReference);
IColorFrame* pColorFrame = NULL;
pColorFrameReference->AcquireFrame(&pColorFrame);

// ... now use both frames
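// note: every call above returns an HRESULT that should be checked in real code;
// AcquireLatestFrame in particular can fail if no new frame is available yet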

You can find more details in the CoordinateMapping Basic sample, once you have installed the Kinect v2 SDK.
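If you want to verify the RelativeTime lag discussed above on your own device, you can read the timestamp of each frame once both have been acquired. A minimal sketch continuing from the snippet above (the division by 10,000 converts the 100-nanosecond TIMESPAN ticks to milliseconds; the exact offset you measure may differ from the 6.25 ms quoted earlier):

// compare the timestamps of the depth and color frames
TIMESPAN depthTime = 0, colorTime = 0;
pDepthFrame->get_RelativeTime(&depthTime);
pColorFrame->get_RelativeTime(&colorTime);
double lagMs = (depthTime - colorTime) / 10000.0; // around 6.25 ms in the reports quoted above

// the frames and frame references are COM objects, so release them when done
pColorFrame->Release();
pColorFrameReference->Release();
pDepthFrame->Release();
pDepthFrameReference->Release();
pMultiSourceFrame->Release();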

I've only used the MS SDK, but I figure the same rules apply. The reason the RelativeTime is the same for all of the above streams is that they are all created out of the IR frame, so they all depend on it. The color frame is not, since it comes from a different camera. As for RelativeTime, it's basically a TimeSpan (in C# terms) which describes something akin to a delta time between frames on Kinect's own runtime clock. It's probably created by the Kinect Service, which grabs the raw input from the sensor, sends the IR to the GPU for expansion into Depth (which is actually an averaging of several frames), Body and BodyIndex (and LongExposureIR), then gets the data back on the CPU and distributes it to all the registered listeners (i.e. the different Kinect v2 apps/instances). I also read, in an MSDN forum, a reply by an MVP who said MS cautioned them against using RelativeTime for anything other than delta-time calculations. So I don't know whether you can confidently use it for manual synchronization between separate streams (i.e. without using the MultiSourceFrameReader).
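To illustrate the delta-time usage the MVP was referring to: since RelativeTime comes from Kinect's own runtime clock, comparing consecutive values within a single stream is the safe way to use it. A small sketch of my own (not taken from the SDK samples), measuring the time between two consecutive depth frames:

// keep the timestamp of the previous depth frame somewhere in your state
TIMESPAN lastDepthTime = 0;

// ... each time a new depth frame has been acquired:
TIMESPAN depthTime = 0;
pDepthFrame->get_RelativeTime(&depthTime);
if (lastDepthTime != 0)
{
    // time between two consecutive depth frames, in milliseconds
    // (TIMESPAN is in 100-nanosecond ticks); at 30 fps this is roughly 33 ms
    double deltaMs = (depthTime - lastDepthTime) / 10000.0;
    // use deltaMs e.g. to detect dropped frames or to scale motion estimates
}
lastDepthTime = depthTime;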
