
Why is (int64_t)-1 + (uint32_t)0 signed?

Why is (int64_t)-1 + (uint32_t)0 signed in C? It looks like the result is int64_t, but my intuition would say uint64_t.

FYI, when I run the following program:

#include <stdint.h>
#include <stdio.h>

#define BIT_SIZE(x) (sizeof(x) * 8)
/* (x) * 0 - 1 is -1 in the promoted type of x; if that type is unsigned,
   shifting its all-ones pattern right by BIT_SIZE(x) - 1 yields 1, while
   for a signed type the (implementation-defined) shift of -1 typically
   yields -1, which the cast turns into a huge unsigned value. */
#define IS_UNSIGNED(x) ((unsigned)(((x) * 0 - 1) >> (BIT_SIZE(x) - 1)) < 2)
#define DUMP(x) dump(#x, IS_UNSIGNED(x), BIT_SIZE(x))

static void dump(const char *x_str, int is_unsigned, int bit_size) {
  /* "u" + 1 points at the terminating '\0', so the "u" prefix is printed
     only for unsigned types. */
  printf("%s is %sint%d_t\n", x_str, "u" + !is_unsigned, bit_size);
}

int main(int argc, char **argv) {
  (void)argc; (void)argv;
  DUMP(42);
  DUMP(42U);
  DUMP(42L);
  DUMP(42UL);
  DUMP(42LL);
  DUMP(42ULL);
  DUMP('x');
  DUMP((char)'x');
  DUMP(1 + 2U);
  DUMP(1 << 2U);
  DUMP((int32_t)-1 + (uint64_t)0);
  DUMP((int64_t)-1 + (uint32_t)0);
  return 0;
}

I get the following output:

42 is int32_t
42U is uint32_t
42L is int32_t
42UL is uint32_t
42LL is int64_t
42ULL is uint64_t
'x' is int32_t
(char)'x' is int8_t
1 + 2U is uint32_t
1 << 2U is int32_t
(int32_t)-1 + (uint64_t)0 is uint64_t
(int64_t)-1 + (uint32_t)0 is int64_t

Why is (int64_t)-1 + (uint32_t)0 signed?

Because the conversion rank of int64_t is greater than that of uint32_t, and int64_t can represent every value of uint32_t. Under the usual arithmetic conversions, (uint32_t)0 is therefore converted to int64_t in the + expression, and int64_t is the type of the resulting expression. Contrast (int32_t)-1 + (uint64_t)0: there uint64_t has the greater rank, so the int32_t operand is converted to uint64_t and the result is unsigned.
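
A minimal standalone check of that rule (not part of the original answer; it only assumes the exact-width types from <stdint.h>) makes the signedness of each sum visible by comparing it against 0:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  /* int64_t outranks uint32_t and can hold every uint32_t value, so the
     uint32_t operand is converted to int64_t and the sum is signed:
     -1 + 0 really is -1. */
  printf("%d\n", (int64_t)-1 + (uint32_t)0 < 0);  /* prints 1 */

  /* uint64_t outranks int32_t, so the int32_t operand is converted to
     uint64_t: -1 wraps to UINT64_MAX and the sum can never be negative. */
  printf("%d\n", (int32_t)-1 + (uint64_t)0 < 0);  /* prints 0 */
  return 0;
}

The two printed values match the last two DUMP lines above: rank decides which operand gets converted, but the result is only signed when the signed type can also hold every value of the unsigned one.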
