I tried assigning a signed int to an unsigned int.
#include <stdio.h>

int main()
{
    int a;
    unsigned int b;
    scanf("%d", &a);
    b = a;
    printf("%d %u\n", a, b);
    return 0;
}
I was hoping that compiling this would produce a warning that I am assigning an int value to an unsigned int variable, but I did not get any warning.
$ gcc -std=c99 -Wall -Wextra -pedantic foo.c
$ echo -1 | ./a.out
-1 4294967295
Next I tried assigning an unsigned int to a signed int.
#include <stdio.h>

int main()
{
    int a;
    unsigned int b;
    scanf("%u", &b);
    a = b;
    printf("%d %u\n", a, b);
    return 0;
}
Still no warning.
$ gcc -std=c99 -Wall -Wextra -pedantic bar.c
$ echo 4294967295 | ./a.out
-1 4294967295
Two questions: why is there no warning in either case, and would an explicit cast be appropriate?
Code 1: This conversion is well-defined. If the int is out of range of unsigned int, then UINT_MAX + 1 is added to bring it into range. Since the code is correct and normal, there should be no warning. However, you could try the gcc switch -Wconversion, which does produce a warning for some correct conversions, particularly signed-to-unsigned conversion.
Code 2: This conversion is implementation-defined if the input is larger than INT_MAX. Most likely the implementation you are on defines it to be the inverse of the conversion in Code 1. Typically, compilers don't warn for implementation-defined behavior that is well-defined on the implementation in question. Again, you can use -Wconversion.
A cast is not necessary and, as a general principle, casts should be avoided because they can suppress useful diagnostics.
This warning is enabled by using the -Wsign-conversion option with gcc.
-Wsign-conversion
    Warn for implicit conversions that may change the sign of an integer value, like assigning a signed integer expression to an unsigned integer variable. An explicit cast silences the warning. In C, this option is enabled also by -Wconversion.
Signed-to-unsigned conversion is well defined by the standard; it is just computation modulo UINT_MAX + 1. So you will never see a warning for that.
Unsigned-to-signed conversion is implementation-defined, that is, platform-dependent. You'd have to look up gcc's documentation to see if and when this is considered erroneous.
And, no, a cast is never helpful here. Its result in terms of the conversion would always be the same; the only thing you could achieve is to switch off warnings, if there were any. In fact, there are very few situations where casts are helpful in C, and integer-to-integer conversion is never among them.
The authors of the C89 Standard noted that the majority of then-current compilers treated signed and unsigned integer math identically outside of a few specific cases, even when the numerical result of a computation would fall between INT_MAX+1u and UINT_MAX. This is one of the factors that led to the rule that short unsigned types should promote to "signed int" rather than "unsigned int". While the Standard didn't require implementations to define behavior in such cases, most of them did, and there seemed to be no reason to believe that trend wouldn't continue.
Unfortunately, the authors of gcc have decided that code which needs to multiply an unsigned char by a positive signed int should be required to cast one of the operands to unsigned before doing the multiply if the result might be in the range INT_MAX+1u to UINT_MAX. If one writes
unsigned multiply(int x, unsigned char y) { return x*y; }
rather than
unsigned multiply(int x, unsigned char y) { return (unsigned)x*y; }
the compiler will usually generate code that works fine for all results up to UINT_MAX, but will sometimes generate code that malfunctions when the product exceeds INT_MAX.