
Converting CV_32FC1 to CV_16UC1

I am trying to convert a float image that I get from a simulated depth camera to CV_16UC1. The camera publishes the depth in CV_32FC1 format. I have tried several approaches, but none of them gave a reasonable result.

// depth is the raw float buffer received from the camera
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
depth_cv.convertTo(depth_converted, CV_16UC1);

The result is a black image. If I use a scale factor, the image turns out white instead.
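For context, convertTo without a scale factor simply performs a saturating cast, so float values such as 0.83 or 2.5 all land near 0 and the image looks black, while too large a factor drives most pixels to 65535 and the image looks white. The appropriate factor depends entirely on the units of the float data; here is a minimal sketch with two common assumptions (depth in metres, or depth already normalised to [0, 1]):

cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_u16;
// If the simulator publishes depth in metres, 1000 turns it into millimetres,
// which fits in CV_16UC1 for scenes closer than about 65 m:
depth_cv.convertTo(depth_u16, CV_16UC1, 1000.0);
// If it publishes depth normalised to [0, 1], map it onto the full 16-bit range instead:
// depth_cv.convertTo(depth_u16, CV_16UC1, 65535.0);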

I also tried to do it this way:

float depthValueF [512*512];
int resolution[2] = {512, 512}; // image width (x) and height (y)
for (int i=0;i<resolution[1];i++){ // go through the rows (y)
    for (int j=0;j<resolution[0];j++){ // go through the columns (x)
        float depthValueOfPixel = depth[i*resolution[0]+j]; // this is location j/i, i.e. x/y
        depthValueF[i*resolution[0]+j] = depthValueOfPixel * 65535.0f;
    }
}

It was not successful either.
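One possible gap in that approach: the scaled values are left in a float buffer, so OpenCV never actually sees 16-bit data. A sketch of the remaining step, assuming the raw depth values really are normalised to [0, 1] (the only case in which a factor of 65535 makes sense):

// Continues from depthValueF above; needs <cstdint> for uint16_t.
uint16_t depthValueU16[512*512];
for (int k=0;k<512*512;k++){
    depthValueU16[k] = static_cast<uint16_t>(depthValueF[k]); // truncate to 16-bit
}
cv::Mat depth_u16(512, 512, CV_16UC1, depthValueU16); // wraps the buffer without copying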

Try using cv::normalize instead, which will not only convert the image to the proper data type, but will also do the scaling for you under the hood.

Therefore:

cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
cv::normalize(depth_cv, depth_converted, 0, 65535, cv::NORM_MINMAX, CV_16UC1);
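One caveat: NORM_MINMAX stretches whatever range happens to be present in the current frame to 0–65535, so the absolute depth scale is not preserved from frame to frame. A quick sanity check of the output (a usage sketch, reusing depth_converted from above):

// Needs <iostream>. After normalization the values should span the full 16-bit range.
double minVal, maxVal;
cv::minMaxLoc(depth_converted, &minVal, &maxVal);
std::cout << "depth range: " << minVal << " .. " << maxVal << std::endl; // expect ~0 .. 65535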
