
Size of an integer literal in VC++ and GCC targeting 64-bit machines

I'm writing cross-platform x86/x86-64/Itanium code oriented towards high performance and would like to avoid any unnecessary operations, including unneeded type casting.

The code I have calls a function like this:

int theResult = Function(anUInt, 10);

Obviously, the type of the value 10 is not explicitly specified. And Function is defined like this:

int Function (unsigned int anUInt, int anInt)
{
    // ...
}

My assumption is that, with VC++ and GCC targeting x86 machines, the size of an integer literal is 4 bytes, so no conversion needs to be performed when the value 10 is assigned to the anInt parameter on entry to the function above (the size of an int is 4 in almost every data model). Some may not consider it the ultimate proof of this assumption, but sizeof(1) does return 4 in VC++ and GCC code compiled for the x86 architecture. And I suppose that, if the size of an integer literal on a 64-bit machine were, say, 8 bytes, the 8 bytes of the value 10 would have to be converted into the 4 bytes of the anInt parameter, slowing down performance if Function is called very frequently (which is the case in my program).

So would the size of an integer literal be 4 in VC++ and GCC code compiled for a 64-bit architecture, like x86-64 or Itanium, or would it be 8? In other words, what would sizeof(1) return in VC++ and GCC code targeting 64-bit machines? Any special cases for the Itanium (IA-64) architecture?

Edit: changed "untyped integer" to "integer literal".

There is no such thing as an "untyped integer" in C++. Although you may not specify the type explicitly, it does have a type, and that type can be counted upon. The Standard speaks to this:

2.13.1 Integer Literals

2 The type of an integer literal depends on its form, value, and suffix. If it is decimal and has no suffix, it has the first of these types in which its value can be represented: int, long int; if the value cannot be represented as a long int, the behavior is undefined. If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int. If it is suffixed by u or U, its type is the first of these types in which its value can be represented: unsigned int, unsigned long int. If it is suffixed by l or L, its type is the first of these types in which its value can be represented: long int, unsigned long int. If it is suffixed by ul, lu, uL, Lu, Ul, lU, UL, or LU, its type is unsigned long int.

The type of an integer literal (e.g. 10) is, by default, int. *

So whether it's 4 or 8 bytes depends on the size of int on your platform (I suspect it's 4 for both of the platforms you mention).

* See the table in section 2.14 of the C++(03) standard; it describes how the types for integer literals are chosen.

It seems that you are making one of the classic mistakes: premature optimization. It has been called "the root of all evil", by the way ;) If you want to write a fast application, write it, profile it (using valgrind or some such), and then optimize the slow parts. Start from sound data structures and algorithms, but leave any other optimizations to the compiler. Tell it to inline your function if you are really sure it needs it, but the compiler will probably do that anyway.

Also, you seem confused about how integer sizes work. You compile the program for a 32-bit or 64-bit target, and a 64-bit architecture can run both 32-bit and 64-bit programs. In all cases, an int is an int: its size is fixed at compile time by the target's data model, and it remains 4 bytes under both the LP64 model used by GCC on 64-bit Linux and the LLP64 model used by VC++ on 64-bit Windows. In any case, there is no promotion or demotion of literal integers at run time. When you run a 32-bit program on 64-bit hardware, 4-byte integers may be held in 8-byte registers, but that is only how the hardware chooses to execute your program; it is completely invisible to the program and does not make it run any slower. Running 32-bit apps on 64-bit hardware and OS does typically incur a penalty of a couple of percent, so you should release a dedicated 64-bit build if you really need that performance.

It seems that your question may really be "How many bytes is an int when compiled for 64-bit hardware?", since the assumption that it might be 8 bytes is what led you to worry about performance. The answer is that int is 4 bytes long on both 32-bit and 64-bit GCC (and likewise in VC++). So your original concern is moot: the size is the same, and no promotion or demotion would be needed.

Any conversion of an integer literal occurs at compile time, not run time. The compiler chooses the most appropriate instructions for loading the value as it will be used by the rest of the expression.
