
How does the standard C preprocessor interpret “0xFFFFFFFF”?

How does the standard C preprocessor interpret "0xFFFFFFFF": 4G-1 or -1?

Same question for "~0"...

If the answer is -1, then how do I make it interpret it as 4G-1 (i.e., is there any other way besides explicitly writing 4294967295)?

PS: I tried it on MS Visual Studio (using a comparison rather than calling printf, of course, as the latter would simply print according to the conversion specifier), and the answer was 4G-1. But I'm not sure that MS Visual Studio uses a standard C preprocessor.

Thank you

As far as the preprocessor is concerned, 0xFFFFFFFF is just a hexadecimal constant. Numbers in preprocessor expressions (which are relevant only in #if and #elif directives) are taken to be of the widest available integer type; the preprocessor will treat 0xFFFFFFFF as a signed integer constant with the value 2^32 - 1, or 4294967295 (since, as of C99, there is always an integer type of at least 64 bits).

If it appears anywhere other than a #if or #elif directive, then the preprocessor is irrelevant. A hexadecimal constant's type is the first of:

  • int
  • unsigned int
  • long int
  • unsigned long int
  • long long int
  • unsigned long long int

For this particular constant, there are several possibilities:

  • If int is narrower than 32 bits and long is wider than 32 bits, then the type is long;
  • If int is narrower than 32 bits and long is exactly 32 bits, then the type is unsigned long;
  • If int is 32 bits, then the type is unsigned int;
  • If int is wider than 32 bits, then the type is int.

On modern systems, unsigned int and unsigned long are the most likely possibilities.

In all cases, the value of 0xFFFFFFFF is exactly 2^32 - 1, or 4294967295; it never has a negative value.

However, you can easily get a negative value (say, -1 ) by converting (either explicitly or implicitly) the value of 0xFFFFFFFF to a signed type:

int n = 0xFFFFFFFF;

But this is not portable. If int is wider than 32 bits, the stored value will be 2^32 - 1. And even if int is exactly 32 bits, the result of converting an unsigned value to a signed type is implementation-defined; -1 is a common result, but it's not guaranteed.

As for ~0 , that's an int expression whose value has all its bits set to 1 -- which is usually -1 , but that's not guaranteed.

What exactly are you trying to do?

Per C 2011 (N1570) 6.10.1 ¶4, integer expressions evaluated in the preprocessor use the widest types in the implementation (intmax_t and uintmax_t). 0xFFFFFFFF will have the value 2^32 - 1, since each C implementation must support that value as an unsigned long. ~0 will not have that value in any normal C implementation.

Expressions are evaluated in the preprocessor only for #if and #elif directives. Text in your question suggests you are trying to print some expression resulting from preprocessor evaluation. That will not happen. Constant expressions in the source text outside of #if and #elif directives are evaluated by the regular C rules, not by the preprocessor.

ANSI C makes very few guarantees about the size of various core data types, so relying on cpp to interpret the value above one way or another portably is a mistake. If pressed, consider wrapping it in checks:

#if 0xFFFFFFFF == -1
...
#else
...
#endif

The preprocessor does not interpret numbers until it is forced to. Writing

 #define N  0xffffffff

is simple text substitution wherever N is used, except for preprocessor #if evaluation. What C does with the value is far more likely to be what you want to ask. For example,

 long number = N;  // declare and initialize to symbolic value N

This may or may not cause a compilation warning or error, depending on the size of long and on how flexibly the compiler converts initialization constants.
