Does printf() modify its parameters?

I was trying to see how different languages handle floating point numbers. I know that there are inherent issues in floating point representation, which is why if you do 0.3 + 0.6 in Python, you get 0.8999999999999999 and not 0.9.

However, these snippets of code simply left me astounded:

double x = 0.1,
    sum = 0;

for(int i=0; i<10; ++i) 
    sum += x;

printf("%.9lf\n",sum);
assert(sum == 1.0);

The above snippet works fine. It prints 1.000000000. However, the following snippet aborts at runtime because the assertion fails:

double x = 0.1,
    sum = 0;

for(int i=0; i<10; ++i) 
    sum += x;

assert(sum == 1.0);
printf("%.9lf\n",sum);

The only difference between the two snippets is the order of the assert and printf statements. This led me to think that printf somehow modifies its arguments, rounding them off.

Can someone please throw some light on this?

Some processors, such as x86, have floating point registers that are higher precision than the data type (80 bit, compared to 64 bit for a double). Calling printf() causes these registers to be stored on the stack, where there is only 64 bits allocated for the variable. This causes the difference you are observing.

For more information see What every computer scientist should know about floating-point arithmetic.

printf() does not modify its parameters.

I can't imagine asking for help with an error without stating what error you are getting. Do you mean the assertion fails? Are you sure both versions don't fail, and that for some reason you only notice the one where the assert runs before printf()?
