
Why does the C standard provide unsized types (int, long long, char vs. int32_t, int64_t, uint8_t etc.)?

Why weren't the contents of stdint.h made the standard types when they were added to the standard (no int, no short, no float, but int32_t, int16_t, float32_t, etc.)? What advantage did, or does, leaving type sizes ambiguous provide?

In Objective-C, why was it decided that CGFloat, NSInteger, and NSUInteger have different sizes on different platforms?

When C was designed, there were computers with different word sizes. Not just multiples of 8, but other sizes like the 18-bit word size on the PDP-7. So sometimes an int was 16 bits, but maybe it was 18 bits, or 32 bits, or some other size entirely. On a Cray-1 an int was 64 bits. As a result, int meant "whatever is convenient for this computer, but at least 16 bits".
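You can see this directly on whatever machine you compile on; here is a minimal check (the exact numbers printed will differ by compiler and target, and the standard only promises minimum ranges, e.g. int must hold at least -32767..32767):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a byte; it is at least 8. */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(int)  = %zu bytes, INT_MAX  = %d\n", sizeof(int), INT_MAX);
        printf("sizeof(long) = %zu bytes, LONG_MAX = %ld\n", sizeof(long), LONG_MAX);
        return 0;
    }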

That was about forty years ago. Computers have changed, so it certainly looks odd now.

NSInteger is used to denote the computer's word size, since it makes no sense to ask for the 5 billionth element of an array on a 32-bit system, but it makes perfect sense on a 64-bit system.
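A rough sketch of how such a word-sized integer typedef can be set up is below; this mirrors the general approach of Apple's headers, though the exact preprocessor conditions in NSObjCRuntime.h are more involved:

    #if defined(__LP64__) && __LP64__
    typedef long NSInteger;             /* on LP64 platforms, long is 64 bits */
    typedef unsigned long NSUInteger;
    #else
    typedef int NSInteger;              /* on 32-bit platforms, int is 32 bits */
    typedef unsigned int NSUInteger;
    #endif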

I can't speak to why CGFloat is a double on 64-bit systems. That baffles me.

C is meant to be portable from embedded devices, through your phone, to desktops, mainframes and beyond. These don't all have the same base types; e.g. the latter may have uint128_t where the others don't. Writing code with fixed-width types would severely restrict portability in some cases.

This is why, by preference, you should use neither uintX_t nor int, long, etc., but the semantic typedefs such as size_t and ptrdiff_t. These are really the ones that make your code portable. See the sketch below.
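As an illustration (assuming an ordinary hosted C environment), the semantic typedefs stay correct regardless of the platform's word size: size_t for object sizes and array indices, ptrdiff_t for pointer differences.

    #include <stddef.h>
    #include <stdio.h>

    /* size_t is the natural type for sizes and indices;
       ptrdiff_t is the natural type for the difference of two pointers. */
    static size_t count_nonzero(const int *data, size_t len)
    {
        size_t n = 0;
        for (size_t i = 0; i < len; ++i)
            if (data[i] != 0)
                ++n;
        return n;
    }

    int main(void)
    {
        int values[] = {1, 0, 3, 0, 5};
        size_t hits = count_nonzero(values, sizeof values / sizeof values[0]);
        ptrdiff_t span = &values[4] - &values[0];
        printf("non-zero: %zu, span: %td\n", hits, span);
        return 0;
    }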
