
Why does a 16-bit compiler give an error for an unsigned char[] declaration?

Here is the code:

unsigned char A[] = { 'a', 'b', 12, 256, 'c', 28 };

It compiles fine in Visual Studio using the x64 compiler, but a 16-bit compiler gives some error, and unfortunately I don't know what kind of error. The question is why the 16-bit compiler gives an error in this case. Can you explain it?

A char is always one byte, so whether your platform has 16-bit or 64-bit words doesn't really matter (though if you're using a system with CHAR_BIT != 8, we'll talk!). What's probably of more consequence is that your 16-bit compiler (yes, I'm assuming Turbo C++) is from the 1980s, a decade before the first standard edition of C++, so it behaves a bit differently overall.

In this case, it is less tolerant of the value 256, which is larger than can be stored in a char (signed or unsigned; with CHAR_BIT == 8, an unsigned char can hold at most 255). I'd say it's "wrong", but it's hard to be non-compliant with a standard that didn't exist at the time. Turbo C++ is pretty much free to do its own thing, in that sense — it's not actually C++, in the way that we understand the term "C++" today.

I would expect your Visual Studio compiler to emit a compiler warning... and then initialise the unsigned char using wraparound (reduction modulo 256), as that's how unsigned values work.

