
Converting grayscale 8-bit to 32-bit

I have the following code

char c = 0xEE;
int color = 0xFF000000;
color += (int)c;
printf("%x\n", color);    

I expected the result to be 0xFF0000EE, but instead the output was

-> feffffee

What am I missing? I thought simply calculating

(int)(0xFF << 24 + c << 16 + c << 8 + c);

would give 0xFFEEEEEE, but I get 0.

EDIT:

The following code seems to work:

unsigned char c = 0xEE;
unsigned int color = 0xFF000000; /* full opacity */

color += (unsigned int)c;
color += (unsigned int)c << 8;
color += (unsigned int)c << 16;    
printf("-> %x\n", color); 

    

char can be a signed or an unsigned type; on your platform it is apparently signed. Assigning 0xEE therefore stores -18 in c, which becomes ffffffee when sign-extended to 32 bits on a two's-complement machine, and adding that to 0xFF000000 wraps around to the feffffee you printed.
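A quick way to see the sign-extension difference (a minimal sketch; whether plain char is signed is implementation-defined, so the ffffffee line assumes a platform where it is):

#include <stdio.h>

int main(void) {
   char sc = 0xEE;            /* stores -18 if char is signed here */
   unsigned char uc = 0xEE;   /* always stores 238 */
   /* conversion to int sign-extends sc but zero-extends uc */
   printf("%x\n", (unsigned int)(int)sc);   /* ffffffee when char is signed */
   printf("%x\n", (unsigned int)(int)uc);   /* ee */
   return 0;
}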

Fixed:

#include <stdio.h>

int main(void) {
   unsigned char c = 0xEE;
   unsigned int color = 0xFF000000;
   color |= c;
   printf("%x\n", color);   
   return 0;
}

Portable:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
   unsigned char c = 0xEE;
   uint32_t color = 0xFF000000;
   color |= c;
   printf("%" PRIx32 "\n", color);   
   return 0;
}
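As for the single-expression attempt in the question: in C, + binds more tightly than <<, so 0xFF << 24 + c parses as 0xFF << (24 + c), and the shift counts end up negative or out of range, which is undefined behaviour (here it happened to print 0). With each shift parenthesized and unsigned operands, the one-liner gives the expected 0xFFEEEEEE. A sketch, reusing the uint32_t approach from the portable version above:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
   unsigned char c = 0xEE;
   /* parenthesize each shift so + no longer captures the shift count */
   uint32_t color = ((uint32_t)0xFF << 24)
                  | ((uint32_t)c << 16)
                  | ((uint32_t)c << 8)
                  |  (uint32_t)c;
   printf("%" PRIx32 "\n", color);   /* ffeeeeee */
   return 0;
}

Using | rather than + also guarantees that one channel can never carry into its neighbour if some of its bits are already set.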

The following modifications will also result in 0xFF0000EE:

unsigned int c = 0x000000EE;
unsigned int color = 0xFF000000;
color = c | color;
