
A practical example using uint32_t instead of unsigned int

I am looking for a practical example where I must use uint32_t instead of unsigned int for a desktop application. Can you please provide C++ code and:

  1. Explain the real-world scenario; I am really focusing on a practical example here, nothing too theoretical;
  2. Specify the target architectures and compilers;
  3. Explain why it will work with uint32_t and (most probably) fail with unsigned int;

My main objective is to be able to easily reproduce it if possible.

Imagine that you wrote a portable program (i.e., it can be compiled and run on various different kinds of computers), and your program needs to be able to save and load a data file.

For simplicity, we'll say that your program's data file only needs to contain a single integer. So when the user clicks "Save", your program fwrite()'s an unsigned int into the file. Suppose you're running it on a 64-bit machine whose compiler uses an ILP64 data model, so sizeof(unsigned int)==8 and the resulting file is eight bytes long.

Now you email the data file to your friend for him to use. Your friend loads in the file, but he compiled and ran your program on his old Pentium Pro machine. When he runs your program, it tries to read in a single unsigned int, as expected, but on a Pentium Pro, sizeof(unsigned int)==4, so the program only reads in 4 bytes rather than all 8. Now his program is displaying incorrect data, because it only read half of the file. That's no good.
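A minimal sketch of that fragile save path, assuming a plain binary file and a hypothetical save_fragile() helper (both are just for illustration):

```cpp
#include <cstdio>

// Fragile: the number of bytes written depends on sizeof(unsigned int)
// on the machine doing the saving (8 on the ILP64 build above, 4 on the
// Pentium Pro build), so the two builds disagree about the file format.
void save_fragile(std::FILE* f, unsigned int value) {
    std::fwrite(&value, sizeof value, 1, f);
}
```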

If, on the other hand, you had specified in your save/load code to write/read a uint32_t rather than an unsigned int, you could rest assured that the file would always be 4 bytes long, no matter what architecture the program is compiled for. So by using uint32_t instead of unsigned int you made your data file easier to write and read correctly.
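A minimal sketch of the portable version, again with hypothetical helper names and error handling kept simple; the record on disk is exactly 4 bytes no matter which platform compiled the program:

```cpp
#include <cstdint>
#include <cstdio>

// Portable: std::uint32_t is exactly 4 bytes everywhere it is defined.
bool save_value(std::FILE* f, std::uint32_t value) {
    return std::fwrite(&value, sizeof value, 1, f) == 1;  // always writes 4 bytes
}

bool load_value(std::FILE* f, std::uint32_t& value) {
    return std::fread(&value, sizeof value, 1, f) == 1;   // always reads 4 bytes
}
```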

(Note that in real life you'd also want to use htonl() and ntohl() so that your data file would be the same for both Big Endian and Little Endian machines, but that's a separate issue)
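For completeness, a sketch of that byte-order step; it assumes a POSIX-style <arpa/inet.h> (on Windows the same functions come from <winsock2.h>), and the function names are just for illustration:

```cpp
#include <cstdint>
#include <arpa/inet.h>  // htonl()/ntohl()

// Convert to network (big-endian) byte order before writing, and back to
// host order after reading, so both endiannesses read the same file.
std::uint32_t to_file_order(std::uint32_t host_value)   { return htonl(host_value); }
std::uint32_t from_file_order(std::uint32_t file_value) { return ntohl(file_value); }
```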

unsigned int may not be 32 bits; it depends on the architecture (and the compiler's data model) your program is built for.

uint32_t is defined in the <cstdint> header (<stdint.h> in C) and is typedef'd to an unsigned integer type that is exactly 32 bits wide.

The practice is mainly about portability. Even if the target platform doesn't have uint32_t defined, you can easily typedef it yourself to a 32-bit unsigned integer type, as in the sketch below.
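For instance, a hedged sketch of such a fallback (the exact underlying type to pick depends on your target; the static_assert assumes a C++11 compiler):

```cpp
// Only for a hypothetical toolchain that lacks <cstdint>:
typedef unsigned int uint32_t;  // choose a type that is 32 bits wide on your target
static_assert(sizeof(uint32_t) == 4, "uint32_t must be exactly 4 bytes");
```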

