
Casting uint32_t to uint64_t results in a different value?

Using Visual Studio 2015 C++ (14.0.25431.01, Update 3), I see unexpected behavior in my code when it is compiled and run as a 64-bit Release build:

#include <iostream>
#include <stdint.h>

int main(int, char**) {
    for (uint32_t i = 1; i < 3; ++i) {
        uint32_t a = i * 0xfbd1e995;
        uint64_t b = a;

        std::cout << a << " 32bit" << std::endl;
        std::cout << b << " 64bit" << std::endl;
    }
}

I expect a and b to have the same value, but when I run this I get the following output:

4224838037 32bit
4224838037 64bit
4154708778 32bit
8449676074 64bit

It looks like the compiler replaces the 32-bit multiplication with a 64-bit multiplication. Is it allowed to do that, or is this a compiler bug? Both g++ and clang give me the numbers that I'd expect.
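
For reference, the expected values follow from plain wrap-around (modulo 2^32) arithmetic: 2 * 0xfbd1e995 = 0x1F7A3D32A, which truncates to 0xF7A3D32A = 4154708778 in 32 bits, while the unwrapped 64-bit product is 8449676074. A minimal sketch computing both by hand (not part of the original program, added for illustration):

#include <iostream>
#include <stdint.h>

int main(int, char**) {
    for (uint64_t i = 1; i < 3; ++i) {
        uint64_t full = i * 0xfbd1e995ull;              // exact 64-bit product
        uint32_t wrapped = static_cast<uint32_t>(full); // the value a should hold
        std::cout << wrapped << " wrapped to 32 bits" << std::endl;
        std::cout << full << " full 64bit product" << std::endl;
    }
}

The buggy 64bit output above is exactly this unwrapped full product.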

EDIT: I've updated my code with a simpler version that has the same problem. I've also submitted a bug report.

I could reproduce this on VS2010, and the immediate cause is this pair of instructions (the constant and the variable x refer to the original, pre-edit version of the code):

add ebx, 5BD1E995h  ; this is x
add rdi, 5BD1E995h  ; this is a 64bit version of x

Since it's a 64-bit addition, it simply carries into the high 32 bits instead of wrapping. This at least makes more sense than conjuring up a 64-bit multiplication; it might be a corner case in induction-variable elimination, but that's just speculation.

Also fun: miscompiling it doesn't even save a cast, since the correct value is right there in rbx.
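
To make the mechanism concrete, here is my reconstruction of what the optimizer effectively did (a sketch using the constant from the simplified code above, not actual compiler output): the multiplication by the loop counter is strength-reduced to a running addition, and the 64-bit copy gets its own accumulator that is incremented with a 64-bit add and therefore never wraps.

#include <iostream>
#include <stdint.h>

int main(int, char**) {
    uint32_t a = 0; // kept in a 32-bit register: wraps modulo 2^32
    uint64_t b = 0; // kept in a 64-bit register: carries into the high 32 bits
    for (uint32_t i = 1; i < 3; ++i) {
        a += 0xfbd1e995u; // 32-bit add, like "add ebx, ..."
        b += 0xfbd1e995u; // 64-bit add, like "add rdi, ..."
        std::cout << a << " 32bit" << std::endl;
        std::cout << b << " 64bit" << std::endl;
    }
}

Run as-is, this prints the same wrong numbers as the miscompiled program: 4224838037/4224838037, then 4154708778/8449676074.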

