
What is the difference between short signed int and signed int?

I was following a tutorial on C, and I found that both signed int and short signed int have the range -32768 to 32767 and occupy 2 bytes. Is there any difference between them? If not, why are there two declarations?

It is platform-specific - in this context, all you can be sure of is sizeof(int) >= sizeof(short) >= 16 bits.

The best answer to your question can be found in the ANSI C standard, section 2.2.4.2 - Numerical Limits. I reproduce the relevant parts of that section here for your convenience:

2.2.4.2 Numerical limits

A conforming implementation shall document all the limits specified in this section, which shall be specified in the headers <limits.h> and <float.h>.

"Sizes of integral types <limits.h>"

The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

  • maximum number of bits for smallest object that is not a bit-field (byte) CHAR_BIT 8

  • minimum value for an object of type signed char SCHAR_MIN
    -127

  • maximum value for an object of type signed char SCHAR_MAX
    +127

  • maximum value for an object of type unsigned char UCHAR_MAX
    255

  • minimum value for an object of type char CHAR_MIN see below

  • maximum value for an object of type char CHAR_MAX see below

  • maximum number of bytes in a multibyte character, for any supported locale MB_LEN_MAX
    1

  • minimum value for an object of type short int SHRT_MIN
    -32767

  • maximum value for an object of type short int SHRT_MAX
    +32767

  • maximum value for an object of type unsigned short int USHRT_MAX
    65535

  • minimum value for an object of type int INT_MIN
    -32767

  • maximum value for an object of type int INT_MAX
    +32767

  • maximum value for an object of type unsigned int UINT_MAX
    65535

  • minimum value for an object of type long int LONG_MIN
    -2147483647

  • maximum value for an object of type long int LONG_MAX
    +2147483647

  • maximum value for an object of type unsigned long int ULONG_MAX
    4294967295

The not-so-widely-implemented C99 adds the following numeric types:

  • minimum value for an object of type long long int LLONG_MIN
    -9223372036854775807 // -(2^63 - 1)

  • maximum value for an object of type long long int LLONG_MAX
    +9223372036854775807 // 2^63 - 1

  • maximum value for an object of type unsigned long long int ULLONG_MAX
    18446744073709551615 // 2^64 - 1

A couple of other answers have correctly quoted the C standard, which places minimum ranges on the types. However, as you can see, those minimum ranges are identical for short int and int - so the question remains: Why are short int and int distinct? When should I choose one over the other?

The reason int exists is to provide a type that is intended to match the "most efficient" integer type on the hardware in question (while still meeting the minimum required range). int is what you should use in C as your general-purpose small integer type - it should be your default choice.

If you know that you'll need more range than -32767 to 32767, you should instead choose long int or long long int. If you are storing a large number of small integers, such that space efficiency is more important than calculation efficiency, then you can instead choose short (or even signed char, if you know that your values will fit into the -127 to 127 range).

C and C++ only make minimum size guarantees on their objects. There is no exact size guarantee. You cannot rely on type short being exactly 2 bytes, only that it can hold values in the specified range (so it is at least two bytes). Type int is at least as large as short and is often larger.

Note that signed int is a long-winded way to say int, while signed short int is a long-winded way to say short int, which is a long-winded way to say short. With the exception of type char (which some compilers will make unsigned), all the built-in integral types are signed by default. The types short int and long int are longer ways to say short and long, respectively.

A signed int is at least as large as a short signed int . On most modern hardware a short int is 2 bytes (as you saw), and a regular int is 4 bytes. Older architectures generally had a 2-byte int which may have been the cause of your confusion.

There is also a long int which is usually either 4 or 8 bytes, depending on the compiler.

Please read the following explanation for signed char; then we will talk about signed/unsigned int.

First I want to prepare background for your question.
................................................

char data type is of two types:

unsigned char;

signed char;

(ie INTEGRAL DATATYPES)

.................................................

Explained in different books as:

char 1 byte -128 to 127 (ie by default signed char)

signed char 1 byte -128 to 127

unsigned char 1 byte 0 to 255

.................................................

One more thing: 1 byte = 8 bits (bit 0 to bit 7).

The most significant bit (bit 7) is used to represent the sign (1 = negative, 0 = positive):


-37 will be represented as 1101 1011 (the most significant bit is 1),

+37 will be represented as 0010 0101 (the most significant bit is 0).


.................................................

Similarly, for char the most significant bit is by default treated as a sign bit.


Why? Because char is commonly used to hold the ASCII codes of particular characters (eg. A = 65).


In any case, when we use char for characters we are using 7 bits only.

In this case, to gain 1 extra value bit for char/int, we use unsigned char or unsigned int - the sign bit becomes a value bit, doubling the maximum representable value.

Thanks for the question.

Similarly, for a 4-byte or 2-byte int we have both signed int and unsigned int.

It depends on the platform.

int is 32 bits wide on a typical 32-bit system, and usually still 32 bits on a 64-bit system (under the common LP64 and LLP64 data models) - it is not guaranteed to match the machine word size.

"I was referring to a tutorial on C; I found that signed int & short signed int range are -32768 to 32767 and it's of 2 bytes."

That's a very old tutorial. The modern C standard is as per Paul R's answer. On a 32-bit architecture, normally:

    short int is 16 bits
          int is 32 bits
     long int is 32 bits
long long int is 64 bits

The size of an int would normally only be 16 bits on a 16-bit machine. 16-bit machines are presumably limited to embedded devices these days.

On a 16-bit machine, sizes may be like this:

    short int is 16 bits
          int is 16 bits
     long int is 32 bits
long long int is 64 bits
