
Writing unsigned int to binary file

This is my first time working with binary files, and I already have clumps of hair in my hands. Anyway, I have the following defined:

unsigned int cols, rows;

Those variables can be anywhere from 1 to about 500. When I get to writing them to a binary file, I'm doing this:

    myFile.write(reinterpret_cast<const char *>(&cols), sizeof(cols));
    myFile.write(reinterpret_cast<const char *>(&rows), sizeof(rows));

When I go back and read the file, with cols = 300, this is the result I get:

44
1
0
0

Can someone please explain to me why I'm getting that result? I can't say that there's something wrong, as I honestly think it's me who doesn't understand things. What I'd LIKE to do is store the value, as is, in the file so that when I read it back, I get that same value. And maybe I do, I just don't know it.

I'd like some explanation of how this works and how I can read back the data I put in.

You have not (yet) shown how you unmarshal the data, nor how you printed the text you've cited. 44 01 00 00 looks like the byte-wise decimal representation of each of the little-endian bytes of the data you've written (decimal 300).

If you read the data back like so, it should give you the effect you want (presuming that you're okay with the limitation that the computer which writes this file has the same endianness as the one which reads it back):

unsigned int colsReadFromFile = 0;
myOtherFile.read(reinterpret_cast<char *>(&colsReadFromFile), sizeof(colsReadFromFile));
if (!myOtherFile)
{
    std::cerr << "Oh noes!" << std::endl;
}
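For completeness, here is a minimal round-trip sketch of my own (the file name grid.bin is made up for illustration). It assumes both streams are opened with std::ios::binary and that the writer and reader agree on endianness and sizeof(unsigned int):

#include <fstream>
#include <iostream>

int main()
{
    unsigned int cols = 300, rows = 200;   // sample values in the 1..500 range

    // Write both values as raw bytes, cols first.
    std::ofstream out("grid.bin", std::ios::binary);
    out.write(reinterpret_cast<const char *>(&cols), sizeof(cols));
    out.write(reinterpret_cast<const char *>(&rows), sizeof(rows));
    out.close();

    // Read them back into fresh variables, in the same order and with the same sizes.
    unsigned int colsRead = 0, rowsRead = 0;
    std::ifstream in("grid.bin", std::ios::binary);
    in.read(reinterpret_cast<char *>(&colsRead), sizeof(colsRead));
    in.read(reinterpret_cast<char *>(&rowsRead), sizeof(rowsRead));
    if (!in)
    {
        std::cerr << "Oh noes!" << std::endl;
        return 1;
    }

    std::cout << colsRead << " " << rowsRead << std::endl;  // prints: 300 200
    return 0;
}

read() pulls back the same four raw bytes per value that write() put in, so no text formatting is involved anywhere.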

You are simply looking at the four bytes of a 32 bit integer, interpreted on a little-endian platform.

300 base 10 = 0x12C

So little-endianness gives you the byte sequence 0x2C 0x01 0x00 0x00, and of course 0x2C = 44.
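If you want to see those bytes for yourself, here is a small sketch of my own that dumps each byte of the value in hex; on a little-endian machine it prints 2c 01 00 00:

#include <cstdio>

int main()
{
    unsigned int value = 300;   // 0x0000012C
    const unsigned char *bytes = reinterpret_cast<const unsigned char *>(&value);
    for (unsigned i = 0; i < sizeof(value); ++i)
        std::printf("%02x ", static_cast<unsigned>(bytes[i]));   // low byte first: 2c 01 00 00
    std::printf("\n");
    return 0;
}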

Each byte in the file has 8 bits, so can represent values from 0 to 255. It's written in little-endian order, with the low byte first. So, starting at the other end, treat the numbers as digits in base 256. The value is 0 * 256^3 + 0 * 256^2 + 1 * 256^1 + 44 * 256^0 (where ^ means exponentiation, not xor).
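As a quick check, the same base-256 arithmetic can be done in a few lines of code (a sketch of my own, not from the answer); it rebuilds the value from the four decimal bytes that were printed:

#include <iostream>

int main()
{
    // The four bytes exactly as they were read back, low byte first.
    unsigned char bytes[4] = {44, 1, 0, 0};

    // value = 0 * 256^3 + 0 * 256^2 + 1 * 256^1 + 44 * 256^0
    unsigned int value = 0;
    for (int i = 3; i >= 0; --i)
        value = value * 256 + bytes[i];

    std::cout << value << std::endl;   // prints 300
    return 0;
}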

    300 in binary is 100101100, which is 9 bits long.

But when you look at the value through a char*, you see only the first byte (8 bits),

    so it is 00101100(bits) of (1 00101100) = 44
                                  ^^^^^^^^
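In other words, the low byte is simply the value masked with 0xFF, and on a little-endian machine that low byte is also the first char in the file. A one-line check (my own sketch) confirms it:

#include <iostream>

int main()
{
    unsigned int cols = 300;                  // binary 1 00101100
    std::cout << (cols & 0xFF) << std::endl;  // low 8 bits -> prints 44
    return 0;
}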
