How are standard integers from <stdint.h> translated during compilation?

In C, it is common (or at least possible) to target different processor architectures with the same source code. It is also common for those architectures to define integer sizes differently. To increase code portability and avoid integer size limitations, it is recommended to use the C standard integer header <stdint.h>. However, I'm confused about how this is actually implemented.

If I were to write a little C program for x86, and then decide to port it over to an 8-bit microcontroller, how does the microcontroller's compiler know how to convert 'uint32_t' to its native integer type?

Is there some mapping requirement when writing C compilers? As in, if your compiler is to be C99-compatible, does it need to have a mapping feature that replaces every uint32_t with the native type?

Thanks!

Typically <stdint.h> contains the equivalent of

typedef int int32_t;
typedef unsigned uint32_t;

with actual type choices appropriate for the current machine.

In actuality it's often much more complicated than that, with a multiplicity of extra, subsidiary header files and auxiliary preprocessor macros, but the effect is the same: names like uint32_t end up being true type names, as if defined by typedef.
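For a concrete picture, here is a purely illustrative sketch (not any real compiler's header) of what those typedefs might boil down to on a typical 8-bit target where int is 16 bits and long is 32 bits:

/* Illustrative sketch only -- the real choices belong to the compiler vendor. */
typedef signed char     int8_t;
typedef unsigned char   uint8_t;
typedef int             int16_t;     /* int is 16 bits on this hypothetical target */
typedef unsigned int    uint16_t;
typedef long            int32_t;     /* long is 32 bits on this hypothetical target */
typedef unsigned long   uint32_t;

An x86 desktop compiler maps the same names onto different underlying types (for example uint32_t onto unsigned int), which is exactly why source code written in terms of uint32_t ports across both.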

You asked "if your compiler is to be C99 compatible, you need to have a mapping feature?", and the answer is basically "yes", but the "mapping feature" can just be the particular types the compiler writer chooses in their distributed copy of stdint.h. (To answer your other question, yes, there are at least as many copies of <stdint.h> out there as there are compilers; there's not one master copy or anything.)

One side comment. You said, "To increase code portability and avoid integer size limitations, it is recommended to use the C standard integer header". The real recommendation is that you use that header when you have special requirements, such as needing a type with an exact size. If for some reason you need a signed type of, say, exactly 32 bits, then by all means, use int32_t from stdint.h. But most of the time, you will find that the "plain" types like int and long are perfectly fine. Please don't let anyone tell you that you must pick an exact size for every variable you declare, and use a type name from stdint.h to declare it with.
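As a small, hedged illustration of that advice (the struct and function names here are hypothetical): exact-width types earn their keep when a layout or protocol demands specific widths, while ordinary computations are fine with plain types.

#include <stdint.h>

/* Fields that must be exactly these widths on every target
   (packing/alignment concerns aside): */
struct sensor_frame {
    uint8_t  id;
    uint16_t flags;
    uint32_t timestamp;
};

/* An ordinary computation where plain int is perfectly adequate: */
int sum_first_n(const int *a, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}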

The handling of different architectures is most likely implemented by conditional preprocessing directives such as #if or #ifdef. For instance, on a GNU/Linux platform it might look like this:

# if __WORDSIZE == 64
typedef long int        int64_t;
# else
__extension__
typedef long long int       int64_t;
# endif
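If you want to double-check what your particular compiler picked, a C11 static assertion is a cheap sanity test (this sketch assumes an 8-bit byte, i.e. CHAR_BIT == 8):

#include <assert.h>   /* static_assert is provided as a macro in C11 */
#include <stdint.h>

static_assert(sizeof(int32_t) == 4, "int32_t should occupy 4 bytes here");
static_assert(sizeof(int64_t) == 8, "int64_t should occupy 8 bytes here");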

There's no magic happening, in the form of "mapping" or "translating": stdint.h simply contains a list of typedef statements. The difference is in the code generator, not the compiler front end.

For an 8-bit target, the code generator will use native instructions for arithmetic on any types which it natively supports (perhaps there is a 16-bit add instruction). For the rest, it will insert calls to library routines to implement the larger data types.

It's not uncommon for the runtime library (RTL) of an 8-bit compiler to contain routines like "long_add", "long_subtract" and so on.
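To make that concrete, here is a hedged, plain-C sketch of what one such helper might compute. Real runtime-library routines are target-specific and typically hand-written in assembly; the name long_add is just borrowed from the examples above.

#include <stdint.h>

/* 32-bit addition decomposed into 16-bit halves, roughly the way an
   8- or 16-bit target's runtime library might have to carry it out. */
uint32_t long_add(uint32_t a, uint32_t b)
{
    uint16_t a_lo = (uint16_t)(a & 0xFFFFu), a_hi = (uint16_t)(a >> 16);
    uint16_t b_lo = (uint16_t)(b & 0xFFFFu), b_hi = (uint16_t)(b >> 16);

    uint16_t lo    = (uint16_t)(a_lo + b_lo);
    uint16_t carry = (uint16_t)(lo < a_lo);           /* low half wrapped around */
    uint16_t hi    = (uint16_t)(a_hi + b_hi + carry);

    return ((uint32_t)hi << 16) | lo;
}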
