
Why shift an int before writing it to a file?

I am reverse-engineering the source code of a file writer, and here are the methods that read/write ints (or rather ushorts):

        BinaryReader _Reader;
        BinaryWriter _Writer;

        void RawWriteInt16(int value)
        {
            byte a = (byte)(value & 255);
            byte b = (byte)(value >> 8 & 255);
            _Writer.Write(a);
            _Writer.Write(b);
        }
        ushort RawReadUInt16()
        {
            int num = _Reader.ReadByte();
            int num2 = _Reader.ReadByte();
            return (ushort)((num2 << 8) + num);
        }

So, can you explain why the code ANDs with 255 (1111 1111), which is always the same value, and why it shifts?

PS: I need this for an article and will give you credit. You can check it here if you like, or on CodeProject: Sequential-byte-serializer

Thanks for the interest. Credit has been given to @Sentry and @Rotem. I will also post the CodeProject URL when the article gets approved.

That code turns a 16-bit integer value into two (8-bit) byte values:

byte a = (byte)(value & 255);

(int)value:    1101 0010 0011 1000
(int)  255:    0000 0000 1111 1111
 bitwise &:    0000 0000 0011 1000
   to byte:              0011 1000

and then

byte b = (byte)(value >> 8 & 255);

(int)value:    1101 0010 0011 1000
      >> 8:    0000 0000 1101 0010 
(int)  255:    0000 0000 1111 1111
 bitwise &:    0000 0000 1101 0010
   to byte:              1101 0010

So you have two bytes that represent the high and low halves of a 16-bit int. The reader simply reverses the process: it reads the two bytes, shifts the high byte back up by 8 bits, and adds the low byte.
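
As a minimal round-trip sketch (not from the original post; the ByteSplitDemo class, the MemoryStream plumbing, and the test value 0xD238 are illustrative assumptions), this splits the example value into the same two bytes, writes them with a BinaryWriter, and re-assembles them with a BinaryReader:

    using System;
    using System.IO;

    class ByteSplitDemo
    {
        static void Main()
        {
            int value = 0xD238;                     // 1101 0010 0011 1000

            byte a = (byte)(value & 255);           // low byte:  0011 1000 (0x38)
            byte b = (byte)(value >> 8 & 255);      // high byte: 1101 0010 (0xD2)

            var stream = new MemoryStream();
            var writer = new BinaryWriter(stream);
            writer.Write(a);                        // low byte is written first
            writer.Write(b);                        // then the high byte

            stream.Position = 0;
            var reader = new BinaryReader(stream);
            int num = reader.ReadByte();            // low byte
            int num2 = reader.ReadByte();           // high byte
            ushort restored = (ushort)((num2 << 8) + num);  // shift the high byte back up

            Console.WriteLine($"{value:X4} -> {a:X2} {b:X2} -> {restored:X4}");
            // prints: D238 -> 38 D2 -> D238
        }
    }

Writing the low byte first means the value is stored in little-endian order, which is why the reader shifts the second byte left by 8 before adding the first.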
