
How can I convert the color stream (1920x1080) into the depth stream (512x424) in Kinect V2 using Matlab or C#?

The Kinect V2 color stream only supports 1920x1080, while the depth stream only supports 512x424. When I start a live stream from both sensors, the frames have different sizes because of the different resolutions. I can't simply resize them, because I need the coordinates: when I resize with imresize(), the coordinates no longer match. I have already read the Matlab documentation, which says the hardware only supports these two formats. How can I do this in code so that both streams have the same resolution? I have tried for two days and failed. Alternatively, any process would do in which I first take the depth image and then capture the RGB (color) image based on that depth resolution.

My project is to take a line from the depth image and map it onto the RGB image of the Kinect V2, but their resolutions are not the same, so the [x, y] coordinates change. When I map the line onto the RGB image, it does not match the coordinates of the depth image. How can I solve this? I thought I would change the resolution, but in the Kinect V2 the resolution cannot be changed. How can I do it in code?

Here is a link to someone who did this. I want to do it in Matlab or C#.

In C# you can use the CoordinateMapper to map points from one space to another. To map from depth space to color space, you subscribe to the MultiSourceFrameArrived event for the color and depth sources and create a handler like this:

  private void MultiFrameReader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
  {
        MultiSourceFrame multiSourceFrame = e.FrameReference.AcquireFrame();
        if (multiSourceFrame == null)
        {
            return;
        }


        using (ColorFrame colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame())
        {
            if (colorFrame == null) return;

            using (DepthFrame depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame())
            {
                if (depthFrame == null) return;

                using (KinectBuffer buffer = depthFrame.LockImageBuffer())
                {
                    ColorSpacePoint[] colorspacePoints = new ColorSpacePoint[depthFrame.FrameDescription.Width * depthFrame.FrameDescription.Height];
                    kinectSensor.CoordinateMapper.MapDepthFrameToColorSpaceUsingIntPtr(buffer.UnderlyingBuffer, buffer.Size, colorspacePoints);
                    //A depth point that we want the corresponding color point
                    DepthSpacePoint depthPoint = new DepthSpacePoint() { X=250, Y=250};

                    //The corresponding color point (row-major index: y * width + x)
                    ColorSpacePoint targetPoint = colorspacePoints[(int)(depthPoint.Y * depthFrame.FrameDescription.Width + depthPoint.X)];

                }
            }
        }  
    }

The colorspacePoints array contains, for each pixel in the depthFrame, the corresponding point in the colorFrame. You should also check whether the targetPoint has an X or Y of infinity, which means there is no corresponding pixel in the target space.
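The flat-index arithmetic and the infinity check can be sketched language-agnostically. Below is a minimal Python illustration; the mapping array is fabricated stand-in data (real values come from the CoordinateMapper), but the lookup logic mirrors the C# above:

```python
import math

# Depth frame dimensions for Kinect V2.
DEPTH_W, DEPTH_H = 512, 424

# Stand-in for the colorspacePoints array: one (x, y) color-space point per
# depth pixel, flattened row-major. Unmappable pixels are (-inf, -inf),
# mirroring what the CoordinateMapper produces for them.
color_points = [(-math.inf, -math.inf)] * (DEPTH_W * DEPTH_H)
color_points[250 * DEPTH_W + 250] = (960.0, 540.0)  # pretend this pixel maps

def lookup(x, y):
    """Return the color-space point for depth pixel (x, y), or None if the
    mapper found no corresponding color pixel."""
    cx, cy = color_points[y * DEPTH_W + x]  # row-major: y * width + x
    if math.isinf(cx) or math.isinf(cy):
        return None
    return (cx, cy)

print(lookup(250, 250))  # the seeded, mappable pixel
print(lookup(0, 0))      # unmapped pixel -> None
```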

For a working example, you can check VRInteraction. I map the depth image to the RGB image to build up a 3D point cloud.

What you want to achieve is called registration.

  1. Calibrate the depth camera to find the depth camera projection matrix (using OpenCV)
  2. Calibrate the RGB camera to find the RGB camera projection matrix (using OpenCV)

    - You can register the depth image to the RGB image:

This maps each pixel of the given depth image to its corresponding RGB pixel, and ends up with a 1920x1080 RGB-D image. Not all RGB pixels will have a depth value, since there are fewer depth pixels. For this you need to:

  • calculate the real-world coordinates of each depth pixel using the depth camera projection matrix
  • calculate the coordinates of the RGB pixel corresponding to those previously calculated real-world coordinates
  • find the matching pixel in the RGB image using the previously calculated RGB pixel coordinates
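The three steps above can be sketched with the standard pinhole-camera model. The matrices below are illustrative toy values only (in practice they come from the OpenCV calibration steps above), and a plain rotation/translation stands in for the full extrinsics:

```python
import numpy as np

# Toy intrinsics (focal lengths and principal points) -- illustrative values,
# not real calibration results.
K_depth = np.array([[365.0,   0.0, 256.0],
                    [  0.0, 365.0, 212.0],
                    [  0.0,   0.0,   1.0]])
K_rgb   = np.array([[1050.0,    0.0, 960.0],
                    [   0.0, 1050.0, 540.0],
                    [   0.0,    0.0,   1.0]])
# Toy extrinsics: RGB camera shifted 52 mm along x relative to the depth camera.
R = np.eye(3)
t = np.array([0.052, 0.0, 0.0])

def depth_pixel_to_rgb(u, v, z):
    """Back-project depth pixel (u, v) with measured depth z (metres) to a
    3D point, then project that point into the RGB image."""
    # Step 1: real-world coordinates in the depth camera frame.
    p3d = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Step 2: move the point into the RGB camera frame.
    p_rgb = R @ p3d + t
    # Step 3: project with the RGB intrinsics and dehomogenise.
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Centre depth pixel at 2 m lands slightly right of the RGB principal point
# because of the 52 mm baseline.
u, v = depth_pixel_to_rgb(256.0, 212.0, 2.0)
print(round(u, 1), round(v, 1))
```

Rounding the result to the nearest integer pixel gives the matching RGB pixel of step 3.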

    - You can register the RGB image to the depth image:

This maps each pixel of the given RGB image to its corresponding depth pixel, and ends up with a 512x424 RGB-D image. For this you need to:

  • calculate the real-world coordinates of each RGB pixel using the RGB camera projection matrix
  • calculate the coordinates of the depth pixel corresponding to those previously calculated real-world coordinates
  • find the matching pixel in the depth image using the previously calculated depth pixel coordinates

If you want to achieve this in real time, you will need to consider GPU acceleration, especially if your depth image contains more than 30,000 depth points.

I wrote my master's thesis on this matter. If you have more questions, I'm more than happy to help.

You will need to resample (imresize in Matlab) if you want to overlay both arrays (e.g. to create an RGB-D image). Note that the field of view differs between depth and color: the far right and left of the color image are not part of the depth image, and the top and bottom of the depth image are not part of the color image.

Consequently, you should:

  1. crop the color image in width to match the depth image
  2. crop the depth image in height to match the color image
  3. resample either the color or the depth image using imresize
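A rough NumPy sketch of this crop-and-resample idea follows. The crop margins here are made-up placeholders (the real margins depend on the calibrated fields of view of the two cameras), and the nearest-neighbour resample stands in for imresize:

```python
import numpy as np

# Dummy frames with the Kinect V2 resolutions (pixel values are placeholders).
color = np.zeros((1080, 1920))
depth = np.zeros((424, 512))

# Illustrative crop margins -- in practice derive these from the calibrated
# fields of view of the two cameras.
color_crop_x = 120   # columns trimmed from each side of the color image
depth_crop_y = 20    # rows trimmed from top and bottom of the depth image

color_c = color[:, color_crop_x:-color_crop_x]   # -> 1080 x 1680
depth_c = depth[depth_crop_y:-depth_crop_y, :]   # -> 384 x 512

# Nearest-neighbour upsample of the cropped depth to the cropped color size
# (imresize-style). Coordinates scale by the same factors, so a depth pixel
# (x, y) maps to roughly (x * sx, y * sy) in the resampled image.
sy = color_c.shape[0] / depth_c.shape[0]
sx = color_c.shape[1] / depth_c.shape[1]
rows = (np.arange(color_c.shape[0]) / sy).astype(int)
cols = (np.arange(color_c.shape[1]) / sx).astype(int)
depth_up = depth_c[np.ix_(rows, cols)]

print(depth_up.shape == color_c.shape)  # shapes now match for overlay
```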
