
Why does implicit casting from float to double return a nonsense number in my program?

I'm working on a Lab assignment for my introduction to C programming class and we're learning about casting.

As part of the lab, I had to write this program and explain the casting that happens in each exercise:

#include <stdio.h>

int main(void)
{
  int a = 2, b = 3;
  float f = 2.5;
  double d = -1.2;
  int int_result;
  float real_result;

  // exercise 1
  int_result = a * f;
  printf("%d\n", int_result);

  // exercise 2
  real_result = a * f;
  printf("%f\n", real_result);

  // exercise 3
  real_result = (float) a * b;
  printf("%f\n", real_result);

  // exercise 4
  d = a + b / a * f;
  printf("%d\n", d);

  // exercise 5
  d = f * b / a + a;
  printf("%d\n", d);

  return 0;
}

I get the following output:

5
5.000000
6.000000
1074921472
1075249152

For the last two outputs, the mathematical operations result in float values. Since the variable they're being stored in is of type double, the conversion from float to double shouldn't affect the values, should it? But when I print out the value of d, I get garbage numbers, as shown in the output.

Could someone please explain?

But when I print out the value of d, I get garbage numbers as shown in the output.

You are using %d as the format instead of %f or %lf. When the format specifier and the argument type don't match, you get undefined behavior.

%d takes an int (and prints it in decimal format).

%f takes a double.

%lf is either undefined (C89) or equivalent to %f (since C99).
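
Here is a minimal sketch of the fix, reusing the variables from your program: once the format matches the argument (%f for a double), the last two exercises print the values you expect rather than garbage.

#include <stdio.h>

int main(void)
{
  int a = 2, b = 3;
  float f = 2.5;
  double d;

  // exercise 4: b / a is integer division (3 / 2 == 1),
  // so the expression is 2 + 1 * 2.5f, i.e. 4.5
  d = a + b / a * f;
  printf("%f\n", d);   // prints 4.500000

  // exercise 5: f * b == 7.5f, divided by a gives 3.75, plus a gives 5.75
  d = f * b / a + a;
  printf("%f\n", d);   // prints 5.750000

  return 0;
}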
