
Kinect depth detection

I know how to do this in WPF, but I am having trouble capturing depth in a WinForms application.

I found some code as below:

private void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
 {
     using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            Bitmap DepthBitmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            if (_depthPixels.Length != depthFrame.PixelDataLength)
            {
                _depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
                _mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
            }

            //Copy the depth frame data onto the bitmap  
            var _pixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(_pixelData);
            BitmapData bmapdata = DepthBitmap.LockBits(new Rectangle(0, 0, depthFrame.Width,
            depthFrame.Height), ImageLockMode.WriteOnly, DepthBitmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(_pixelData, 0, ptr, depthFrame.Width * depthFrame.Height);
            DepthBitmap.UnlockBits(bmapdata);                       
            pictureBox2.Image = DepthBitmap;
        }              
    }
  }

but this gives me a purple image instead of the grayscale depth map. Any improvement or help?

I found the solution myself: a function that converts the depth frame:

void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            this.depthFrame32 = new byte[depthFrame.Width * depthFrame.Height * 4];
            //Update the image to the new format
            this.depthPixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(this.depthPixelData);
            byte[] convertedDepthBits = this.ConvertDepthFrame(this.depthPixelData, ((KinectSensor)sender).DepthStream);
            Bitmap bmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(convertedDepthBits, 0, ptr, 4 * depthFrame.PixelDataLength);
            bmap.UnlockBits(bmapdata);
            pictureBox2.Image = bmap;
        }
    }
}

  private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
    {
        //Run through the depth frame making the correlation between the two arrays
        for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
        {
            // Console.WriteLine(i16 + "," + i32);
            //We don’t care about player’s information here, so we are just going to rule it out by shifting the value.
            int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
            //We are left with 13 bits of depth information that we need to convert into an 8 bit number for each pixel.
            //There are hundreds of ways to do this. This is just the simplest one.
            //Lets create a byte variable called Distance.
            //We will assign this variable a number that will come from the conversion of those 13 bits.
            byte Distance = 0;
            //XBox Kinects (default) are limited between 800mm and 4096mm.
            int MinimumDistance = 800;
            int MaximumDistance = 4096;
            //XBox Kinects (default) are not reliable closer to 800mm, so let’s take those useless measurements out.
            //If the distance on this pixel is bigger than 800mm, we will paint it in its equivalent gray
            if (realDepth > MinimumDistance)
            {
                //Convert the realDepth into the 0 to 255 range for our actual distance.
                //Use only one of the following Distance assignments
                //White = Far
                //Black = Close
            //Distance = (byte)(((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
                //White = Close
                //Black = Far
                Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
                //Use the distance to paint each layer (R, G and B) of the current pixel.
                //Painting R, G and B with the same color will make it go from black to gray
                this.depthFrame32[i32 + RedIndex] = (byte)(Distance);
                this.depthFrame32[i32 + GreenIndex] = (byte)(Distance);
                this.depthFrame32[i32 + BlueIndex] = (byte)(Distance);
            }
            //If we are closer than 800mm, just paint it black so we know this pixel is not giving a good value
            else
            {
                this.depthFrame32[i32 + RedIndex] = 0;
                this.depthFrame32[i32 + GreenIndex] = 0;
                this.depthFrame32[i32 + BlueIndex] = 0;
            }
        }
        return this.depthFrame32;
    }

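The per-pixel math in `ConvertDepthFrame` does not depend on the SDK: drop the 3 player-index bits, discard values at or below the 800 mm floor, and map 800–4096 mm linearly onto 255–0. A minimal standalone sketch of just that step (C++ here, with the same constants as the code above):

```cpp
#include <cassert>
#include <cstdint>

// Mirrors ConvertDepthFrame's per-pixel step: the low 3 bits of each
// 16-bit value hold the player index, the upper 13 bits the depth in mm.
uint8_t DepthToGray(int16_t packed)
{
    const int playerIndexBits = 3;            // DepthImageFrame.PlayerIndexBitmaskWidth
    const int minDist = 800, maxDist = 4096;  // default-range Kinect limits (mm)
    int realDepth = packed >> playerIndexBits;
    if (realDepth <= minDist) return 0;       // unreliable below 800mm: paint black
    // White = close, black = far, as in the answer above.
    return (uint8_t)(255 - ((realDepth - minDist) * 255 / (maxDist - minDist)));
}
```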
So I presume the RGB frame is working for you. In that case:

first, to enable the depth camera you need to call:

sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | /* any other flags you use */);

second, to start streaming you need to call:

if (int(streams&_Kinect_zed)) ret=sensor->NuiImageStreamOpen(
    NUI_IMAGE_TYPE_DEPTH,                   // Depth camera or rgb camera?
    NUI_IMAGE_RESOLUTION_640x480,           // Image resolution
    NUI_IMAGE_STREAM_FLAG_DISTINCT_OVERFLOW_DEPTH_VALUES, // Image stream flags // NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE does not work !!!
    2,                                      // Number of frames to buffer
    NULL,                                   // Event handle
    &stream_hzed); else stream_hzed=NULL;
  • beware: not all resolution/flag combinations work on all Kinect models !!!
  • the one above is safe even for older models like mine

This is how I capture a frame (called repeatedly from a timer or thread loop):

ret=sensor->NuiImageStreamGetNextFrame(stream_hzed,0,&imageFrame); if (ret>=0)
    {
    // copy data from frame
    imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
    if (LockedRect.Pitch!=0)
        {
        const BYTE* curr = (const BYTE*) LockedRect.pBits;
        union _col { BYTE u8[2]; WORD u16; } col;
        col.u16=0;
        pnt3d p;
        long ax,ay;
        float mxs=float(xs)/(62.0*deg),mys=float(ys)/(48.6*deg);
        for(int x=0,y=0;;)
            {
            col.u8[0]=*curr; curr++;
            col.u8[1]=*curr; curr++;
            p.raw=col.u16;
            p.rgb=&rgb_default;
                 if (p.raw==0x0000) p.z=0.0;    // p.z is the perpendicular distance from the sensor (the Kinect corrects this itself)
            else if (p.raw>=0x8000) p.z=4.0;
            else p.z=0.8+(float(p.raw-6576)*0.00012115165336374002280501710376283);
            // depth FOV correction
            p.x=zx[x]*p.z;
            p.y=zy[y]*p.z;
            // color FOV correction     zed 58.5° x 45.6° | rgb 62.0° x 48.6° | 25mm distance
            if (p.z>0.0)
                {
                ax=(((x+10-xs2)*241)>>8)+xs2;   // cameras x-offset and different FOV
                ay=(((y+30-ys2)*240)>>8)+ys2;   // cameras y-offset??? and different FOV
                if ((ax>=0)&&(ax<xs))
                if ((ay>=0)&&(ay<ys)) p.rgb=&rgb[ay][ax];
                }
            xyz[y][x]=p;
            x++; if (x>=xs) { x=0; y++; if (y>=ys) break; }
            }
        }
    // release frame
    imageFrame.pFrameTexture->UnlockRect(0);
    ret=sensor->NuiImageStreamReleaseFrame(stream_hzed, &imageFrame);
    stream_changed|=_Kinect_zed;
    }
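Two pieces of the loop above are plain arithmetic and can be lifted out of the SDK code: the raw 16-bit value → meters conversion, and the depth-to-color pixel registration. Note that the constants (the 6576 offset, the ~0.0001212 scale, the +10 shift and 241/256 ratio) are this answer's empirical calibration for the author's older Kinect, not official SDK values:

```cpp
#include <cassert>
#include <cstdint>

// Raw 16-bit depth -> distance in meters, mirroring the three branches above:
// 0x0000 means no measurement, >=0x8000 means overflow (clamped to 4m).
float RawToMeters(uint16_t raw)
{
    if (raw == 0x0000) return 0.0f;
    if (raw >= 0x8000) return 4.0f;
    return 0.8f + float(raw - 6576) * 0.00012115165336374002f;
}

// Depth-image x -> matching color-image x: shift for the cameras' physical
// offset, then rescale around the image center (xs2 = half the width) for
// the different FOVs (241/256 ~ 58.5deg / 62.0deg).
long DepthXToColorX(long x, long xs2)
{
    return (((x + 10 - xs2) * 241) >> 8) + xs2;
}
```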

Sorry for the incomplete source code - it is all copy-pasted from my Kinect class (BDS2006 Turbo C++) - so check your code in case you forgot something, and if so, transform my code to C# (I am not a C# user). Most likely you forgot to NuiInitialize with the depth flag, or set an invalid resolution/flags/precision or framerate for your hardware.

If nothing works at all, then you need to initialize the sensor in the first place:

int sensors;
INuiSensor *sensor;
if ((NUIGetSensorCount(&sensors)<0)||(sensors<1)) return false;
if (NUICreateSensorByIndex(0,&sensor)<0) return false;

If you link to the DLL on your own, then link only these functions:

typedef HRESULT(__stdcall *_NuiGetSensorCount     )(int * pCount);                        _NuiGetSensorCount      NUIGetSensorCount     =NULL;
typedef HRESULT(__stdcall *_NuiCreateSensorByIndex)(int index,INuiSensor **ppNuiSensor);  _NuiCreateSensorByIndex NUICreateSensorByIndex=NULL;
  • Every other function must be obtained via COM inside the SDK headers !!!
  • If you link and use them on your own, then you will not be connected to your physical Kinect !!!
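The two typedefs above follow the usual manual-binding pattern: declare a function-pointer type matching the export's signature, resolve the address at runtime (GetProcAddress on Windows), and call through the pointer. A platform-neutral sketch of the pattern, with a hypothetical stub standing in for the real resolver and the `__stdcall` convention omitted:

```cpp
#include <cassert>

// Same shape as the NuiGetSensorCount typedef above, minus Windows types.
typedef long (*_GetSensorCount)(int *pCount);

// Stub standing in for the real DLL export - a placeholder for this
// sketch, not part of any SDK.
static long StubGetSensorCount(int *pCount) { *pCount = 1; return 0; }

// In the real code this assignment comes from GetProcAddress(...).
_GetSensorCount GetSensorCount = StubGetSensorCount;
```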

Basically, the Kinect SDK is developed for WPF applications. In Windows Forms you have to convert the short array of depth data to a Bitmap to display it on a PictureBox. Based on my experiments, WPF is better for programming with the Kinect.

Below is the function that I used to convert a depth frame to a Bitmap for showing in a picture box.

 private Bitmap ImageToBitmap(DepthImageFrame Image)
    {
        short[] pixeldata = new short[Image.PixelDataLength];
        int stride = Image.Width * 2;
        Image.CopyPixelDataTo(pixeldata);
        Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format16bppRgb555);
        BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
        IntPtr ptr = bmapdata.Scan0;
        Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
        bmap.UnlockBits(bmapdata);
        return bmap;
    }
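Note that `Format16bppRgb555` does not give grayscale: each raw 16-bit depth value is reinterpreted as packed 5-bit R, G, B fields, so small depth changes mostly move the blue channel and the image comes out false-colored (the purple effect from the question). A sketch of how one packed value splits, assuming the 0RRRRRGGGGGBBBBB layout:

```cpp
#include <cassert>
#include <cstdint>

struct Rgb555 { int r, g, b; };

// Unpack a 16bpp RGB555 pixel: bit 15 unused, then 5 bits each of R, G, B.
Rgb555 Unpack555(uint16_t v)
{
    return { (v >> 10) & 0x1F, (v >> 5) & 0x1F, v & 0x1F };
}
```

For a true grayscale display you still need a conversion like `ConvertDepthFrame` above rather than a raw copy into an RGB555 bitmap.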

You may call it like this:

DepthImageFrame VFrame = e.OpenDepthImageFrame();
if (VFrame == null) return;
short[] pixelS = new short[VFrame.PixelDataLength];
Bitmap bmap = ImageToBitmap(VFrame);
