
Convert a byte[] into an Emgu/OpenCV Image

I have a byte array representing a greyscale image that I would like to use with OpenCV in C#, using the Emgu wrapper. I am trying to figure out how to convert this into an Emgu.CV.Image without first converting it to a System.Drawing.Bitmap.

So far, this constructor for Image appears promising. It looks like it takes the pixel rows, columns, and then the array with my data to construct an image. However, it wants them in a weird format and I'm struggling with how to correctly construct the TDepth[,,] data argument.

Here's what I have so far:

// This gets initialized in the constructor and filled in with greyscale image data elsewhere in the code:
byte[] depthPixelData;

// Once my depthPixelData is processed, I'm trying to convert it to an Image and this is where I'm having issues
Image<Gray, Byte> depthImage = new Image<Gray, Byte>([depthBitmap.PixelHeight, depthBitmap.PixelWidth, depthPixelData]);

Visual Studio is making it obvious to me that just passing in an array isn't going to cut it, but I have no idea how to construct the requisite TDepth[,,] object with my pixel data to pass in to the Image constructor.

This code needs to run at ~30fps, so I'm trying to be as efficient as possible with object creation, memory allocation, etc.

Another solution would be to create an Emgu.CV.Image using just the width and height of the image. Then you can do something like this:

byte[] depthPixelData = new byte[640*480]; // your data

Image<Gray, byte> depthImage = new Image<Gray, byte>(640, 480);

depthImage.Bytes = depthPixelData;

As long as the width and the height are correct and the width is divisible by 4 (a consequence of how Emgu.CV.Image is implemented), there should be no problems. You can even reuse the Emgu.CV.Image object and just change the bytes every frame if you don't need to save the per-frame objects.
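A minimal sketch of that reuse pattern, assuming a fixed 640x480 stream (the class and the OnDepthFrame callback are hypothetical stand-ins for however your code receives each grayscale buffer):

using Emgu.CV;
using Emgu.CV.Structure;

public class DepthFrameConverter
{
    // Allocated once; reused for every incoming frame.
    private readonly Image<Gray, byte> depthImage = new Image<Gray, byte>(640, 480);

    // Hypothetical per-frame callback, invoked ~30 times per second.
    public Image<Gray, byte> OnDepthFrame(byte[] depthPixelData)
    {
        // Replaces the pixel data in place; no new Image allocation per frame.
        depthImage.Bytes = depthPixelData;
        return depthImage;
    }
}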

Personally, I would do something along these lines:

byte[] depthPixelData = ...;

int imageWidth = ...;
int imageHeight = ...;
int channelCount = 1; // grayscale

// Repack the flat buffer into the [row, column, channel] layout the Image constructor expects
byte[,,] depthPixelData3d = new byte[imageHeight, imageWidth, channelCount];

for(int line = 0, offset = 0; line < imageHeight; line++)
    for(int column = 0; column < imageWidth; column++, offset++)
        depthPixelData3d[line, column, 0] = depthPixelData[offset];

For performance considerations you probably want to (see the sketch after this list):

  • turn this into an unsafe block (should be trivial)
  • allocate your byte[,,] only once (unless your image size changes)
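A sketch that applies both points, assuming the frame size never changes (the fixed-pointer copy is just one way to avoid the managed multidimensional indexer in the inner loop; it requires compiling with /unsafe):

// Allocated once, outside the per-frame path (only valid while the size stays constant).
byte[,,] depthPixelData3d = new byte[imageHeight, imageWidth, 1];

// Per frame: copy the flat buffer into the preallocated 3D array.
unsafe
{
    fixed (byte* src = depthPixelData)
    fixed (byte* dst = depthPixelData3d)
    {
        int pixelCount = imageWidth * imageHeight;
        for (int i = 0; i < pixelCount; i++)
            dst[i] = src[i]; // single channel, so both buffers share the same linear order
    }
}

Image<Gray, byte> depthImage = new Image<Gray, byte>(depthPixelData3d);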

The Emgu.CV.Image class is defined as

public class Image<TColor, TDepth> : CvArray<TDepth>, ...

TColor
Color type of this image (either Gray, Bgr, Bgra, Hsv, Hls, Lab, Luv, Xyz, Ycc, Rgb or Rgba)

TDepth
Depth of this image (either Byte, SByte, Single, Double, UInt16, Int16 or Int32)

This generic parameter TDepth is misleading; in your case, TDepth[,,] means byte[,,].

To copy one array to another you can use Buffer.BlockCopy:

byte[,,] imageData = new byte[depthBitmap.PixelHeight, depthBitmap.PixelWidth, colorChannels];
Buffer.BlockCopy(depthPixelData, 0, imageData, 0, imageData.Length);
Image<Gray, Byte> depthImage = new Image<Gray, Byte>(imageData);
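Note that Buffer.BlockCopy counts bytes rather than elements; for a byte[,,] the two are the same, which is why imageData.Length works here. A sketch of the same idea wrapped in a small helper (ToGrayImage and its parameter names are just illustrative, assuming a tightly packed grayscale buffer):

static Image<Gray, byte> ToGrayImage(byte[] pixels, int height, int width)
{
    // One channel, so the flat buffer and the 3D array have identical byte layouts;
    // pixels.Length equals height * width for a tightly packed grayscale buffer.
    byte[,,] imageData = new byte[height, width, 1];
    Buffer.BlockCopy(pixels, 0, imageData, 0, pixels.Length);
    return new Image<Gray, byte>(imageData);
}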
