
C convert from int to char

I have a simple piece of code:

char t = (char)(3000);

Then the value of t is -72. The hex value of 3000 is 0xBB8. I can't understand why the value of t is -72.

Thanks for your answers.


I don't know about the Mac, so I only know my result is -72. As far as I know, the Mac uses big-endian byte order; does that affect the result? I don't have a Mac to test on, so I'd like to hear from Mac users.

The hex value of 3000 is 0xBB8.

And so the hex value of the char (which, by the way, appears to be signed on your compiler) is 0xB8.

If it were unsigned, 0xB8 would be 184. But since it's signed, its actual value is 256 less, i.e. -72.

If you want to know why this is, read about two's complement notation.
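A minimal sketch of that arithmetic, assuming an 8-bit char and two's complement (typical, but implementation specific):

#include <stdio.h>

int main(void) {
    unsigned char u = 0xB8; /* the low byte of 0x0BB8 (3000) */
    signed char s = 0xB8;   /* same bit pattern; an out-of-range initializer is
                               implementation-defined in general, but gives -72
                               on typical two's complement machines */

    printf("unsigned: %d\n", u);           /* 184 */
    printf("signed:   %d\n", s);           /* -72 */
    printf("184 - 256 = %d\n", 184 - 256); /* -72 */
    return 0;
}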

A char is 8 bits (which can only represent a 0-255 range). Trying to cast 3000 to a char is... impossible, at least for what you are intending.

This is happening because 3000 is too big a value and causes an overflow. Char is generally from -128 to 127 signed, or 0 to 255 unsigned, but it can change depending upon the implementation.
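If you want to see the actual limits on your implementation, the standard <limits.h> macros report them; a quick sketch:

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("CHAR_BIT = %d\n", CHAR_BIT); /* bits in a char, usually 8 */
    printf("CHAR_MIN = %d\n", CHAR_MIN); /* 0 if plain char is unsigned */
    printf("CHAR_MAX = %d\n", CHAR_MAX); /* typically 127 or 255 */
    return 0;
}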

char is an integral type with a certain range of representable values. int is also an integral type with a certain range of representable values. Normally, the range of int is [much] wider than that of char. When you try to squeeze into a char an int value that doesn't fit into the range of char, the value will not "fit", of course. The actual result is implementation-defined.

In your case, 3000 is an int value that doesn't fit into the range of char on your implementation. So, you won't get 3000 as the result. If you really want to know why it specifically came out as -72, consult the documentation that came with your implementation.

A char is (typically) just 8 bits, so you can't store values as large as 3000 (which would require at least 12 bits). So if you try to store 3000 in a byte, it will just wrap.

Since 3000 is 0xBB8, it requires two bytes: one 0x0B and one 0xB8. If you try to store it in a single byte, you will just get one of them (0xB8). And since a byte is (typically) signed, that is -72.
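A small sketch that splits 3000 into its two bytes with shifts and masks, showing which byte the char keeps:

#include <stdio.h>

int main(void) {
    int n = 3000; /* 0x0BB8 */
    printf("high byte: 0x%02X\n", (unsigned)((n >> 8) & 0xFF)); /* 0x0B */
    printf("low byte:  0x%02X\n", (unsigned)(n & 0xFF));        /* 0xB8 */
    printf("(char)n = %d\n", (char)n); /* -72 here: only the low byte survives */
    return 0;
}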

As specified, the 16-bit hex value of 3000 is 0x0BB8. Although implementation specific, from your posted results this is likely stored in memory in 8-bit pairs as B8 0B (some architectures would store it as 0B B8; this is known as endianness).

char, on the other hand, is probably not a 16-bit type. Again, this is implementation specific, but from your posted results it appears to be 8 bits, which is not uncommon.

So while your program has allocated 8 bits of memory for your value, you're trying to store twice as much information there. When the conversion happens, only the low-order octet survives, in this case B8; the 0B is simply discarded. (Actually writing two bytes into one-byte storage, by contrast, could overwrite something important later down the line; that is known as a buffer overflow, which is very bad.)

Assuming two's complement (technically implementation specific, but a reasonable assumption), the hex value B8 translates to either -72 or 184 in decimal, depending on whether you're dealing with a signed or unsigned type. Since you didn't specify either, your compiler goes with its default. Yet again, this is implementation specific, and it appears your compiler goes with signed char.

Therefore, you get -72. But don't expect the same results on any other system.
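To the earlier question about the Mac: here is a sketch (assuming a 32-bit int) that prints the bytes of 3000 in memory order, which reveals your machine's endianness. Note that the conversion itself works on the value rather than on stored bytes, so the surviving low-order byte is 0xB8 either way:

#include <stdio.h>

int main(void) {
    int n = 3000; /* 0x00000BB8 with a 32-bit int */
    const unsigned char *p = (const unsigned char *)&n;

    for (size_t i = 0; i < sizeof n; i++)
        printf("%02X ", (unsigned)p[i]); /* B8 0B 00 00 on little-endian,
                                            00 00 0B B8 on big-endian */
    printf("\n");
    printf("(char)n = %d\n", (char)n);   /* -72 on either byte order */
    return 0;
}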

char is used to hold a single character, and you're trying to store a four-digit int in one. Perhaps you meant to use an array of chars, i.e. a string (char t[5] in this case: four digits plus the terminating null).

To convert an int to a string (untested):

#include <stdlib.h>

int main() {
    int num = 3000;
    char numString[5];        /* 4 digits plus the terminating '\0' */
    itoa(num, numString, 10); /* itoa is non-standard but common; writes "3000" */
    return 0;
}
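Note that itoa is not standard C and may be missing on some systems. A portable sketch using snprintf instead, with the buffer sized for any 32-bit int plus the terminating '\0':

#include <stdio.h>

int main(void) {
    int num = 3000;
    char numString[12]; /* fits any 32-bit int, sign included, plus '\0' */

    snprintf(numString, sizeof numString, "%d", num);
    printf("%s\n", numString);
    return 0;
}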

Oh, I see, it's overflowing. It's like char only goes from -256 to 256 or something like that, I'm not sure; like when a variable's type has a maximum limit of 256 and you add 1, it wraps around to -256, and so on.
