
Convert 16bit colour to 32bit

I've got a 16-bit bitmap image with each colour represented as a single short (2 bytes). I need to display this in a 32-bit bitmap context. How can I convert a 2-byte colour to a 4-byte colour in C++?

The input format contains each colour in a single short (2 bytes).

The output format is 32-bit RGB. This means each pixel has 4 bytes, I believe?

I need to convert the short value into RGB colours.

Excuse my lack of knowledge of colours, this is my first adventure into the world of graphics programming.

Normally a 16-bit pixel is 5 bits of red, 6 bits of green, and 5 bits of blue data. The minimum-error solution (that is, the one for which the output colour is guaranteed to be as close as possible to the input colour) is:

red8bit   = (red5bit << 3) | (red5bit >> 2);
green8bit = (green6bit << 2) | (green6bit >> 4);
blue8bit  = (blue5bit << 3) | (blue5bit >> 2);
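A minimal sketch of those formulas applied to a whole pixel, assuming the RGB565 layout from the question (red in the high bits; the function name is just for illustration):

```cpp
#include <cstdint>

// Unpack an RGB565 pixel and expand each channel to 8 bits with the
// shift-and-replicate formulas above.
void rgb565_channels(uint16_t pixel, uint8_t &r8, uint8_t &g8, uint8_t &b8)
{
    uint16_t r5 = (pixel >> 11) & 0x1F;  /* 5 bits of red   */
    uint16_t g6 = (pixel >> 5)  & 0x3F;  /* 6 bits of green */
    uint16_t b5 =  pixel        & 0x1F;  /* 5 bits of blue  */

    r8 = uint8_t((r5 << 3) | (r5 >> 2));
    g8 = uint8_t((g6 << 2) | (g6 >> 4));
    b8 = uint8_t((b5 << 3) | (b5 >> 2));
}
```

Note that white (0xFFFF) comes out as exactly (255, 255, 255), and black as (0, 0, 0).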

To see why this solution works, let's look at a red pixel. Our 5-bit red is some fraction fivebit/31. We want to translate that into a new fraction eightbit/255. Some simple arithmetic:

     fivebit   eightbit
     ------- = --------
        31        255

Yields:

     eightbit = fivebit * 8.226

Or approximately (note the squiggly ≈):

     eightbit ≈ (fivebit * 8) + (fivebit * 0.25)

That operation is a multiply by 8 plus a divide by 4. Ouch - both operations that might take forever on your hardware. Luckily they're both powers of two and can be converted to shift operations:

     eightbit = (fivebit << 3) | (fivebit >> 2);

The same steps work for green, which has six bits per pixel, but you get an accordingly different answer, of course! The quick way to remember the solution is that you're taking the top bits off of the "short" pixel and adding them on at the bottom to make the "long" pixel. This method works equally well for any data set you need to map up into a higher resolution space. A couple of quick examples:

    five bit space         eight bit space        error
    00000                  00000000                 0%
    11111                  11111111                 0%
    10101                  10101101               +0.15%
    00111                  00111001               -1.01%
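The "top bits reattached at the bottom" trick generalizes to any bit widths. Here is a sketch of a generic helper (the name and the bounds are my own; it is only valid when `from <= to <= 2 * from`, otherwise the right-shift count goes negative):

```cpp
#include <cstdint>

// Expand a `from`-bit value to `to` bits by shifting it up and refilling
// the vacated low bits with its own top bits.
// Requires from <= to <= 2 * from.
uint32_t expand_bits(uint32_t v, int from, int to)
{
    return (v << (to - from)) | (v >> (2 * from - to));
}
```

For example, `expand_bits(x, 5, 8)` is exactly the red/blue formula above, and `expand_bits(x, 6, 8)` is the green one.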

Common formats include BGR0, RGB0, 0RGB, 0BGR. In the code below I have assumed 0RGB. Changing this is easy: just modify the shift amounts in the last line.

unsigned long rgb16_to_rgb32(unsigned short a)
{
    /* 1. Extract the red, green and blue values */

    /* from rrrr rggg gggb bbbb */
    unsigned long r = (a & 0xF800) >> 11;
    unsigned long g = (a & 0x07E0) >> 5;
    unsigned long b = (a & 0x001F);

    /* 2. Convert them to the 0-255 range:
       There is more than one way. You can just shift them left:
       to 00000000 rrrrr000 gggggg00 bbbbb000
           r <<= 3;
           g <<= 2;
           b <<= 3;
       But that means your image will be slightly dark and
       off-colour, as white 0xFFFF converts to F8,FC,F8.
       So instead you can scale by multiply and divide: */

    r = r * 255 / 31;
    g = g * 255 / 63;
    b = b * 255 / 31;
    /* This ensures 31/31 converts to 255/255 */

    /* 3. Construct your 32-bit format (this is 0RGB): */
    return (r << 16) | (g << 8) | b;

    /* Or for BGR0:
    return (r << 8) | (g << 16) | (b << 24);
    */
}
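For what it's worth, the multiply-and-divide scaling and the shift-and-replicate trick from the earlier answer track each other closely: both hit 0 and 255 at the endpoints and never differ by more than one step in between. A small self-contained check of that claim (not part of the answer's code):

```cpp
#include <cstdint>

// Maximum absolute difference between v * 255 / 31 (truncating division)
// and (v << 3) | (v >> 2) over all 5-bit values v.
int max_diff_5bit()
{
    int worst = 0;
    for (uint32_t v = 0; v <= 31; ++v) {
        int scaled     = int(v * 255 / 31);
        int replicated = int((v << 3) | (v >> 2));
        int d          = scaled - replicated;
        if (d < 0) d = -d;
        if (d > worst) worst = d;
    }
    return worst;
}
```

The difference comes from rounding: truncating division rounds down, while bit replication sometimes lands one above it (e.g. v = 4 gives 32 vs 33).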

Multiply the three (four, when you have an alpha channel) values by 16 - that's it :)

You have a 16-bit colour and want to make it a 32-bit colour. That gives you four times four bits, which you want to convert to four times eight bits. You're adding four bits, but you should add them on the right side of the values. To do this, shift each value left by four bits (multiply by 16). Additionally, you can compensate a bit for the inaccuracy by adding 8 (the four bits you're adding can hold values 0-15, so adding the average, 8, compensates).

Update: This only applies to colours that use 4 bits for each channel and have an alpha channel.
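A sketch of this shift-by-four idea, assuming an ARGB4444 layout (the layout and function name are my assumptions). Note that, even with the +8 compensation, full intensity 0xF lands at 0xF8 rather than 0xFF, which is the inaccuracy the update warns about:

```cpp
#include <cstdint>

// Shift each 4-bit channel left by 4 (multiply by 16) and add the
// mid-point 8, as suggested above. ARGB4444 layout assumed.
uint32_t argb4444_shift_by_four(uint16_t v)
{
    uint32_t a = (v >> 12) & 0xF;
    uint32_t r = (v >> 8)  & 0xF;
    uint32_t g = (v >> 4)  & 0xF;
    uint32_t b =  v        & 0xF;
    return (((a << 4) | 8) << 24) | (((r << 4) | 8) << 16)
         | (((g << 4) | 8) << 8)  |  ((b << 4) | 8);
}
```

So 0xFFFF maps to 0xF8F8F8F8 and 0x0000 to 0x08080808 - fast, but the endpoints are off, which is what the nibble-duplication answer below fixes.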

I'm here long after the fight, but I actually had the same problem with ARGB colours instead, and none of the answers are quite right. Keep in mind that this answer covers a slightly different situation, where we want to do this conversion:

AAAARRRRGGGGBBBB -> AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB

If you want to keep the same ratio of your colour, you simply have to do a cross-multiplication: to convert a value x between 0 and 15 to a value between 0 and 255, you want y = 255 * x / 15.

However, 255 = 15 * 17, and 17 = 16 + 1, so you get y = 16 * x + x. That is the same as doing a four-bit shift to the left and then adding the value again (or, more visually, duplicating the value: 0b1101 becomes 0b11011101).

Now that you have this, you can compute your whole number by doing:

  a = v & 0b1111000000000000
  r = v & 0b111100000000
  g = v & 0b11110000
  b = v & 0b1111
  return b | b << 4 | g << 4 | g << 8 | r << 8 | r << 12 | a << 12 | a << 16

Moreover, as the lower bits won't have much effect on the final colour, and if exactness isn't necessary, you can gain some performance by simply multiplying each component by 16:

  return b << 4 | g << 8 | r << 12 | a << 16

(All the left-shift amounts look strange because we did not bother doing a right shift first.)

There are some questions about the model - is it HSV or RGB? But if you want to "ready, fire, aim", I'd try this first.

#include <stdint.h>

uint32_t convert(uint16_t _pixel) 
{
    uint32_t pixel;
    pixel = (uint32_t)_pixel;
    return ((pixel & 0xF000) << 16)
         | ((pixel & 0x0F00) << 12)
         | ((pixel & 0x00F0) << 8)
         | ((pixel & 0x000F) << 4);
}

This maps 0xRGBA -> 0xRRGGBBAA, or possibly 0xHSVA -> 0xHHSSVVAA, but it won't do 0xHSVA -> 0xRRGGBBAA.
