
Different behavior of double-to-int64_t conversion on new Apple silicon (arm64) vs. x86_64

Environment: Docker; GCC 10 on arm64 Debian, GCC 7 on x86_64 Debian.

#include <cstdio>
#include <cinttypes>
#include <limits>
#include <cstdint>

int main(int argc, char *argv[]) {
    // Convert INT64_MAX to double, then back to int64_t.
    double d = std::numeric_limits<int64_t>::max();
    int64_t t = static_cast<int64_t>(d);
    printf("%" PRId64 "\n", t);  // PRId64: portable format for int64_t
    return 0;
}

Output:

  • arm64: 9223372036854775807
  • x86_64: -9223372036854775808

Can someone help me understand why there is a difference?

The answer is the different behavior of the ARM and x86 architectures when a floating-point-to-integer conversion overflows (here, the conversion to int64_t). INT64_MAX (2^63 - 1) is not exactly representable as a double, so the first line rounds d up to 2^63; converting that back to int64_t overflows, which is undefined behavior in C++, so the result is whatever the hardware conversion instruction produces. ARM's behavior is documented here (for ARMv7): https://developer.arm.com/documentation/ddi0403/d/Application-Level-Architecture/Application-Level-Programmers--Model/The-optional-Floating-point-extension/Floating-point-data-types-and-arithmetic?lang=en

TL;DR: On ARM the result saturates to the maximum representable value (9223372036854775807 for int64_t), as @PeterCordes guessed.
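A minimal sketch to verify the rounding step (my own check, not part of the original answer; it assumes a compiler that accepts C++17 hex-float literals such as 0x1p63):

#include <cstdio>
#include <cstdint>
#include <limits>

int main() {
    double d = std::numeric_limits<int64_t>::max();
    // 0x1p63 is the hex-float literal for 2^63: INT64_MAX rounds up to it,
    // because doubles near 2^63 are 2048 apart and 2^63 is the nearest one.
    printf("%s\n", d == 0x1p63 ? "rounded up to 2^63" : "exact");
    printf("%.1f\n", d);  // prints 9223372036854775808.0, i.e. INT64_MAX + 1
    return 0;
}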

EDIT: On x86 / x86-64, an out-of-range conversion (the cvttsd2si instruction) returns the "integer indefinite" value, an integer with only the MSB set, which equals -9223372036854775808 (INT64_MIN) for int64_t.
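Because the cast itself is undefined behavior once the value is out of range, portable code should check or clamp before converting. A minimal sketch of a saturating helper (the function name and NaN policy are my own, not from the answer; again assumes C++17 hex-float literals):

#include <cstdint>
#include <cmath>

// Saturating double -> int64_t conversion (hypothetical helper).
// Any d >= 2^63 is out of range, since the largest in-range double
// is just below 2^63; NaN must also be handled before the cast.
int64_t to_int64_saturating(double d) {
    if (std::isnan(d)) return 0;        // arbitrary policy for NaN
    if (d >= 0x1p63)  return INT64_MAX; // 2^63 and above saturate
    if (d < -0x1p63)  return INT64_MIN; // below -2^63 saturates
    return static_cast<int64_t>(d);     // in range: well-defined
}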
