
Why isn't Visual Studio showing an unsigned value as int?

I'm using VS 2012 and I'm reading a book that teaches the basics of C/C++. The book is quite old and uses Borland C++, not VS. There is a lesson that teaches how to convert an unsigned value to int. The book says that int values range from -32768 to 32767, and unsigned values from 0 to 65535. It gives the number 42000 and shows that as unsigned it is 42000 and as int it is -23536. In VS it shows 42000 both as unsigned and as int.

#include <stdio.h>
int main(void)
{
    unsigned int value = 42000;

    printf("Show 42000 as unsigned %u\n", value);
    printf("Show 42000 as int %d\n", value);

}

Output:

Show 42000 as unsigned 42000
Show 42000 as int 42000

Side note: tried both %d and %i; neither worked.

Your old book gives the ranges for a 16-bit int and unsigned. Newer compilers use either a 32-bit or 64-bit int.

Burn that book. It was never correct.

The minimum range for an int is -32767 to +32767, since it could be a one's complement 16-bit number.

The maximum range is not specified.

VS typically uses a 32-bit two's complement representation for an int.
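
You can check what your own compiler uses by printing the size yourself. A minimal sketch (%zu is avoided here because older VS runtimes don't support it):

#include <stdio.h>

int main(void)
{
    /* On VS (both x86 and x64) this prints 4, i.e. a 32-bit int. */
    printf("sizeof(int) = %u bytes\n", (unsigned)sizeof(int));
    return 0;
}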

By the way, the behaviour of using %d for an unsigned is undefined. If you write that and compile the code, then the compiler reserves the right to eat your cat. If the book used that construct and claimed it was portable, then burn it again.

More than likely you have a 32-bit wide int, which can hold [-2147483648, 2147483647], so you are not overflowing it the way the book is trying to demonstrate. Try it with a value greater than 2147483647 but smaller than 4294967295, which is the maximum value a 32-bit unsigned integer can hold.
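
For example (a minimal sketch; the exact negative number you get from the cast is implementation-defined, but on VS's 32-bit two's complement int it wraps around):

#include <stdio.h>

int main(void)
{
    unsigned int value = 3000000000u;   /* larger than INT_MAX (2147483647) */

    printf("As unsigned: %u\n", value);        /* 3000000000 */
    printf("As int:      %d\n", (int)value);   /* -1294967296 on VS: 3000000000 - 2^32 */
    return 0;
}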

What the book says is true, assuming int is 16 bits wide. On most modern compilers it is 32 or 64 bits.

If you want to illustrate this point with the given values, VS supplies __int16 and unsigned __int16, which are integer types guaranteed to be 16 bits wide.

EDIT:

You can also use the standard types int16_t and uint16_t, available in VS 2010 and later. Be sure to #include <stdint.h> to get access to them.
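
With those you can reproduce the book's exact numbers. A minimal sketch (converting an out-of-range value to a signed type is implementation-defined in C, but VS wraps modulo 2^16):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t value  = 42000;            /* fits in 16 unsigned bits */
    int16_t  as_int = (int16_t)value;   /* 42000 - 65536 = -23536 on VS */

    printf("Show 42000 as unsigned %u\n", (unsigned)value);  /* 42000 */
    printf("Show 42000 as int %d\n", (int)as_int);           /* -23536 */
    return 0;
}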

You can look up the exact ranges of the signed and unsigned integer types yourself.

To do this in C you have to include the header

#include <limits.h>

and use constants

INT_MAX
INT_MIN
UINT_MAX

(There is no UINT_MIN; the minimum value of an unsigned integer is simply 0.)

For example

printf( "INT_MAX = %d\n", INT_MAX );

In C++ you can do the same using the header

#include <limits>

For example

std::cout << "The maximnun value of type int is " 
          << std::numeric_limits<int>::max()
          << std::endl;

Borland C++ is very old and has gone through many transitions - it is now part of Embarcadero's XE suite. As has been said, that book assumes you're using a 16-bit compiler, so maybe it's worth finding something a bit more up to date :)
