
Write() and Read() raw bytes from NetworkStream, data differs at some bytes

I've written some code for sending a byte[] array over a NetworkStream with a KNOWN size before sending, but the data sent and the data received differ at some positions.

MAX_SIZE is the known size of the data I want to send.

    public static void SendBytes(TcpClient clientSocket, byte[] outStream)
    {
        Debug.WriteLine("SendBytes() number of bytes: " + outStream.Length.ToString());

        NetworkStream serverStream = clientSocket.GetStream();

        serverStream.Write(outStream, 0, outStream.Length);
        //serverStream.Flush();
    }

    public static byte[] ReceiveBytes(TcpClient clientSocket, int MAX_SIZE)
    {
        Debug.WriteLine("[" + DateTime.Now.ToString("G") + "] - " + "ReceiveBytes() started.");

        NetworkStream networkStream = clientSocket.GetStream();

        byte[] bytesFrom = new byte[MAX_SIZE];
        clientSocket.ReceiveBufferSize = MAX_SIZE;

        networkStream.Read(bytesFrom, 0, (int)clientSocket.ReceiveBufferSize);

        Debug.WriteLine("[" + DateTime.Now.ToString("G") + "] - " + "ReceiveBytes(), received number of raw bytes: " + bytesFrom.Length.ToString());

        return CommonUtils.SubArray(bytesFrom, 0, MAX_SIZE);
    }

If I send the data (bytes in hex) a7 fc d0 51 0e 99 cf 0d 00, the received data is a7 fc d0 51 0e 99 cf 0d 53.

Most likely you're seeing garbage due to packet structuring; TCP only guarantees that the correct bytes will arrive in the correct order (or a stream failure) - it says nothing about the chunks in which they arrive. Because of that, it is vital that you:

  1. catch the return value from Read, and only process that many bytes from any chunk
  2. perform your own framing - i.e. batching the stream into messages independently of how the pieces arrive

If your messages are always a fixed size, then "2" becomes "buffer data until I have at least N bytes, then process the data in chunks of N, retaining whatever is left over, then resume buffering". But in the general case, it might be "buffer until I see a sentinel value, such as a line-feed", or "buffer until I have a complete header, then parse the header to see how much data to expect, then buffer until I have that much data".
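For illustration, here is a minimal sketch of that header-then-payload variant, assuming you control both ends of the connection; SendFrame, ReceiveFrame and ReadExact are names invented for this example, and ReadExact is essentially the same read loop shown further down:

// Hypothetical length-prefixed framing helpers (names invented for this example).
// Requires: using System; using System.IO; using System.Net.Sockets;
static void SendFrame(NetworkStream stream, byte[] payload)
{
    // 4-byte little-endian length prefix, then the payload itself
    byte[] header = BitConverter.GetBytes(payload.Length);
    stream.Write(header, 0, header.Length);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReceiveFrame(NetworkStream stream)
{
    // read the header first, then exactly as many bytes as it announces
    int length = BitConverter.ToInt32(ReadExact(stream, 4), 0);
    return ReadExact(stream, length);
}

static byte[] ReadExact(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read <= 0) throw new EndOfStreamException();
        offset += read;
    }
    return buffer;
}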

There are tools and utilities to help make de-framing and handling backlog much simpler - for example, with the new "pipelines" API it is simply a case of inspecting the pipe and telling the pipe how much you want to consume (rather than it handing you everything, with no way of rejecting data for now) - but switching from Stream to "pipelines" is quite a big set of changes for most people.
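As a rough illustration only, not a drop-in replacement for your code, a fixed-size reader over System.IO.Pipelines could look something like this, assuming the 9-byte messages from your example:

// Sketch of reading fixed-size messages with System.IO.Pipelines.
// Requires the System.IO.Pipelines package; the 9-byte message size is assumed.
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Net.Sockets;
using System.Threading.Tasks;

static async Task ReadMessagesAsync(NetworkStream stream)
{
    const int MessageSize = 9;
    PipeReader reader = PipeReader.Create(stream);
    while (true)
    {
        ReadResult result = await reader.ReadAsync();
        ReadOnlySequence<byte> buffer = result.Buffer;

        // consume only whole messages; leave any partial tail in the pipe
        while (buffer.Length >= MessageSize)
        {
            byte[] message = buffer.Slice(0, MessageSize).ToArray();
            Console.WriteLine(BitConverter.ToString(message));
            buffer = buffer.Slice(MessageSize);
        }

        // tell the pipe what was consumed and what was examined
        reader.AdvanceTo(buffer.Start, buffer.End);

        if (result.IsCompleted) break;
    }
    await reader.CompleteAsync();
}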

In your case, you can probably use:

byte[] bytesFrom = new byte[MAX_SIZE];
int outstanding = MAX_SIZE, read, offset = 0;
// keep reading until the buffer is full or the stream ends
while (outstanding > 0 && (read = networkStream.Read(bytesFrom, offset, outstanding)) > 0)
{
    offset += read;       // advance past the bytes already received
    outstanding -= read;  // bytes still missing
}
if (outstanding != 0) throw new EndOfStreamException(); // stream ended before MAX_SIZE bytes arrived

This creates a read loop that fills bytesFrom completely, or fails with an exception.

Stream.Read returns a value that indicates how much data was actually read. It is by no means guaranteed to be the same amount as you requested. Ignore this value at your peril.
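If you happen to be on a newer runtime (.NET 7 or later, as far as I know), Stream.ReadExactly wraps that same check-and-loop for you:

byte[] bytesFrom = new byte[MAX_SIZE];
networkStream.ReadExactly(bytesFrom, 0, MAX_SIZE); // throws EndOfStreamException if the stream ends first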

If you're happy to allocate the memory for the entire stream, why not just copy into a MemoryStream and fish the full buffer out of that? Stream.CopyTo and Stream.CopyToAsync are nice high level abstractions that make this easy.
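A minimal sketch of that approach; note that it only returns once the sender closes its end of the connection, because CopyTo reads until end of stream:

byte[] everything;
using (var ms = new MemoryStream())
{
    networkStream.CopyTo(ms);   // blocks until the remote side shuts down its half of the connection
    everything = ms.ToArray();  // the complete payload as a single array
}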
