
Unexpected behavior from unsigned __int64

#include <stdio.h>

int main(void)
{
    unsigned __int64 difference;
    difference = (64 * 33554432);
    printf("size %I64u \n", difference);
    difference = (63 * 33554432);
    printf("size %I64u \n", difference);
}

The first number is ridiculously large; the second number is the correct answer. How does changing the multiplier from 62 to 63 cause such a change?

The first value is 18446744071562067968; the second value is 2113929216.

Sorry, the values were 64 and 63, not 63 and 62.

Unless qualified otherwise, integer literals are of type int. I would assume that on the platform you're on, an int is 32 bits wide. The product 64 * 33554432 is 2147483648, i.e. 2^31, which does not fit in a signed 32-bit int, so the calculation overflows and ends up negative. That negative value is then converted to unsigned __int64, where it wraps back around to a very, very large positive integer. With 63 the product is 2113929216, which still fits in 31 bits, so it stays positive.

Voila:

#include <stdio.h>

int main()
{
    int a1 = (64*33554432);  // overflows a 32-bit int
    int a2 = (63*33554432);  // still fits in a 32-bit int

    printf("%08x\n", a1);    // 80000000  (negative)
    printf("%08x\n", a2);    // 7e000000  (positive)

    unsigned __int64 b1 = a1;  // sign-extended, then reinterpreted as unsigned
    unsigned __int64 b2 = a2;

    printf("%016llx\n", b1); // ffffffff80000000
    printf("%016llx\n", b2); // 000000007e000000
}
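
If the goal is simply to get 2147483648 into the 64-bit variable, the usual fix is to make the multiplication itself happen in 64-bit arithmetic, for example by giving one operand a 64-bit type. A minimal sketch, assuming MSVC (where unsigned __int64 and the %I64u format specifier are available):

#include <stdio.h>

int main(void)
{
    /* Force the multiplication to be done in 64-bit arithmetic so the
       intermediate result never overflows a 32-bit int. */
    unsigned __int64 difference = 64ULL * 33554432;  /* ULL suffix makes the literal 64-bit */
    printf("size %I64u \n", difference);             /* prints 2147483648 */

    /* An explicit cast on one operand works just as well. */
    difference = (unsigned __int64)64 * 33554432;
    printf("size %I64u \n", difference);
    return 0;
}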

On gcc it works fine and gives the correct number in both cases.

size 2113929216
size 2080374784

Could it be a bug with printf? Are you using MSVC or something similar? Try stepping through it in the debugger and inspect difference after each assignment. If the numbers look right there, then it might just be a printf problem. Under gcc on Linux, however, it's correct.
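
As a side note on the format specifier: with a C99-capable compiler you can avoid the MSVC-specific %I64u entirely by using uint64_t and the PRIu64 macro from <inttypes.h>. A minimal sketch (the multiplication is done in 64-bit arithmetic to sidestep the overflow discussed above):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t difference = (uint64_t)64 * 33554432;  /* 64-bit multiply, no overflow */
    printf("size %" PRIu64 "\n", difference);       /* portable 64-bit unsigned format */
    return 0;
}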
