
What is the difference between using INTXX_C macros and performing a type cast on literals?

For example, this code is broken (I've just fixed it in the actual code):

uint64_t a = 1 << 60;  /* bug: 1 has type int, and shifting a 32-bit int by 60 is undefined behavior */

It can be fixed as:

uint64_t a = (uint64_t)1 << 60;

but then this crossed my mind:

uint64_t a = UINT64_C(1) << 60;

I know that UINT64_C(1) is a macro that usually expands to 1ul on 64-bit systems, but then what makes it different from just doing a type cast?

(uint64_t)1 is formally an int value 1 cast to uint64_t.
1ul is a constant 1 of type unsigned long, which is probably the same as uint64_t on a 64-bit system.
The macro is a portable way to specify the correct suffix for a constant (literal) of type uint64_t.

As you are dealing with constants, all calculations will be done by the compiler and the result is the same.

The suffix appended by the macro (ul, system-specific) can be used for literal constants only.

The cast (uint64_t) can be used for both constant and variable values. With a constant it will have the same effect as the suffix/macro; with a variable of a different type it may perform a truncation or extension of the value, e.g. filling the higher bits with 0 when widening from 32 bits to 64 bits.
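As a minimal sketch of that difference (the function is made up for illustration, assuming a platform with a 32-bit int):

#include <stdint.h>

uint64_t widen_then_shift(uint32_t v)
{
    /* The cast zero-extends the 32-bit variable to 64 bits before
       the shift, so the top bits are not lost: */
    return (uint64_t)v << 8;
}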

It's a matter of taste whether you use UINT64_C(1) or (uint64_t)1. The macro makes it a bit clearer that you are dealing with a constant.

There is no obvious difference or advantage; these macros are somewhat redundant. There are some minor, subtle differences between the cast and the macro:

  • (uintn_t)1 might be cumbersome to use for preprocessor purposes, whereas UINTN_C(1) expands into a single pp token.

  • The resulting type of UINTN_C is actually uint_leastn_t, not uintn_t. So it is not necessarily the type you expected.

  • Static analysers for coding standards like MISRA-C might moan if you type 1 rather than 1u in your code, since shifting signed integers isn't a brilliant idea regardless of their size.
    (uint64_t)1u is MISRA compliant; UINT64_C(1) might not be, or at least the analyser won't be able to tell, since it can't expand pp tokens the way a compiler does. And UINT64_C(1u) will likely not work, since the macro implementation probably looks something like this (a failing use is sketched just after this list):

     #define UINT64_C(n) n ## ull   // no cast: the result must stay usable in #if; BAD: 1u ## ull pastes into the invalid constant 1uull
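To make that failure concrete, here is a minimal sketch; MY_UINT64_C is a hypothetical stand-in for such an implementation, not the real library macro:

#include <stdint.h>

#define MY_UINT64_C(n) n ## ull    /* hypothetical, suffix-pasting style */

uint64_t ok = MY_UINT64_C(1);      /* expands to 1ull - compiles fine */

/* uint64_t bad = MY_UINT64_C(1u); */
/* would expand to 1uull, which is not a valid integer constant,
   so the line fails to compile */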

In general, I would recommend using an explicit cast, or better yet, wrapping all of this inside a named constant:

#define MY_BIT ( (uint64_t)1u << 60 )
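For illustration, such a named constant is then used like any other bit mask (a hypothetical, self-contained snippet):

#include <stdint.h>

#define MY_BIT ( (uint64_t)1u << 60 )

int main(void)
{
    uint64_t flags = 0u;
    flags |= MY_BIT;                 /* set bit 60   */
    flags &= ~MY_BIT;                /* clear bit 60 */
    return (flags & MY_BIT) != 0;    /* test bit 60  */
}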

UINT64_C(1) produces a single token via token pasting, whereas ((uint64_t)1) is a constant expression with the same value.

They can be used interchangeably in the sample code posted, but not in preprocessor directives such as #if expressions.
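A minimal sketch of that limitation (the macro names here are made up):

#include <stdint.h>

#define BIT_MACRO (UINT64_C(1) << 60)   /* usable in code and in #if */
#define BIT_CAST  ((uint64_t)1 << 60)   /* usable in code only       */

#if BIT_MACRO > 0                       /* OK: an integer constant expression */
uint64_t high_bit = BIT_MACRO;
#endif

/* #if BIT_CAST > 0 would not compile: the preprocessor replaces the
   unknown identifier uint64_t with 0, leaving ((0)1 << 60), which is
   a syntax error in a #if expression. */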

XXX_C macros should be used to define constants that can be used in #if expressions. They are only needed if the constant must have a specific type, otherwise just spelling the constant in decimal or hexadecimal without a suffix is sufficient.
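As a sketch of when the type matters in an #if: preprocessor arithmetic is carried out in intmax_t or uintmax_t, and the suffix supplied by the macro decides which of the two is used:

#include <stdint.h>

/* UINT64_C(0) carries an unsigned suffix, so the comparison is done
   in uintmax_t, where -1 wraps around to UINTMAX_MAX: */
#if -1 > UINT64_C(0)
int compared_as_unsigned = 1;   /* this branch is compiled */
#endif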
