
Sockets: How to send data to the client without 'waiting' on them as they receive/parse it

I have a socket server, written in C++ using boost::asio, and I'm sending data to a client.

The server sends the data out in chunks, and the client parses each chunk as it receives it. Both are pretty much single threaded right now.

What design should I use on the server to ensure that the server is just writing out the data as fast as it can and never waiting on the client to parse it? I imagine I need to do something asynchronous on the server.

I imagine changes could be made on the client to accomplish this too, but ideally the server should not wait on the client regardless of how the client is written.

I'm writing data to the socket like this:

size_t bytesWritten = m_Socket.Write( boost::asio::buffer(buffer, bufferSize));

Update:

I am going to try using Boost's mechanism to write asynchronously to the socket. See http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html

eg

 boost::asio::async_write(socket_, boost::asio::buffer(message_),
        boost::bind(&tcp_connection::handle_write, shared_from_this(),
          boost::asio::placeholders::error,
          boost::asio::placeholders::bytes_transferred));

You can ensure asynchronous communication by transporting the data not over TCP but over UDP. However, if you need to use TCP, let the client store the data away quickly and process it in a different thread or asynchronously with a cron job.

When you pass data to a socket, it does not wait for the receiver to process it. It does not even wait for the data to be transmitted. The data is put into an outbound queue that is processed by the OS in the background. The writing function returns how many bytes were queued for transmission, not how many bytes were actually transmitted.
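As a minimal illustration of that behaviour (a sketch only; the WriteChunk function and its parameters are made up for this example, and a connected boost::asio::ip::tcp::socket is assumed):

#include <boost/asio.hpp>
#include <cstddef>

// Sketch: write_some() just copies bytes into the OS send buffer and reports how
// many were accepted; it does not wait for the client to receive or parse them.
std::size_t WriteChunk(boost::asio::ip::tcp::socket& socket,
                       const char* buffer, std::size_t bufferSize)
{
    boost::system::error_code ec;
    std::size_t queued = socket.write_some(boost::asio::buffer(buffer, bufferSize), ec);
    return ec ? 0 : queued;   // may be less than bufferSize; the rest must be retried
}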

If you set your socket to non-blocking, then writes should fail if they would otherwise block. You can then queue up the data however you like, and arrange for another attempt to be made later to write it. I don't know how to set socket options in the boost socket API, but that's what you're looking for.
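In case it helps, Boost.Asio does expose this option as socket::non_blocking() in reasonably recent versions. A hedged sketch of the idea (the TryWrite function and the pendingData queue are assumptions for this example):

#include <boost/asio.hpp>
#include <cstddef>
#include <string>

// Sketch: put the socket into non-blocking mode, attempt a write, and keep any
// bytes the OS refused (would_block) in our own queue to retry later.
void TryWrite(boost::asio::ip::tcp::socket& socket,
              const char* data, std::size_t size,
              std::string& pendingData)          // application-level outbound queue
{
    socket.non_blocking(true);

    boost::system::error_code ec;
    std::size_t n = socket.write_some(boost::asio::buffer(data, size), ec);

    if (ec == boost::asio::error::would_block)
        n = 0;                                   // nothing accepted; try again later
    else if (ec)
        return;                                  // real error: report or close

    pendingData.append(data + n, size - n);      // queue whatever was not accepted
}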

But this is probably more trouble than it's worth. You'd need to select a socket that's ready for writing, presumably one of several open simultaneously, shove more data into it until it's full, and repeat. I don't know if the boost sockets API has an equivalent of select, so that you can wait on multiple sockets at once until any of them is ready to write.
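For what it's worth, newer Boost versions (1.66 and later, if I recall correctly) do offer a reactor-style equivalent via basic_socket::async_wait(). A hedged sketch, where RetryQueuedWrites() is a hypothetical helper that flushes the application's own queue:

#include <boost/asio.hpp>

void RetryQueuedWrites(boost::asio::ip::tcp::socket& socket);   // hypothetical helper

// Sketch: ask Asio to call us back when this socket becomes writable again --
// roughly what select() with a write fd_set would tell you for one socket.
void WaitUntilWritable(boost::asio::ip::tcp::socket& socket)
{
    socket.async_wait(boost::asio::ip::tcp::socket::wait_write,
        [&socket](const boost::system::error_code& ec)
        {
            if (!ec)
                RetryQueuedWrites(socket);
        });
}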

The reason that servers typically start a thread (or spawn a process) per client connection is precisely so that they can get on with serving other clients while they're waiting on I/O, while avoiding implementing their own queues. The simplest way to "arrange for another attempt later" is just to do blocking I/O in a dedicated thread.
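A minimal sketch of that thread-per-connection pattern (the Serve() function, which does the blocking writes for one client, is assumed; recent Boost uses io_context where older versions used io_service):

#include <boost/asio.hpp>
#include <memory>
#include <thread>

void Serve(boost::asio::ip::tcp::socket& client);   // assumed: blocking I/O for one client

// Sketch: accept clients forever and hand each connection to its own thread, so a
// slow client only blocks its own thread rather than the accept loop or other clients.
void AcceptLoop(boost::asio::io_context& io, boost::asio::ip::tcp::acceptor& acceptor)
{
    for (;;)
    {
        auto client = std::make_shared<boost::asio::ip::tcp::socket>(io);
        acceptor.accept(*client);
        std::thread([client]() { Serve(*client); }).detach();
    }
}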

What you can't do, unless boost has done something unusual in its sockets API, is require the OS or the sockets library to queue up arbitrary amounts of data for you without blocking. There may be an async API which will call you back when the data is written.

Continuing from the comments on Stefan's post:

It is definitely possible to buffer on either the client or the server side. But make sure to consider what Neil wrote: if we just buffer data blindly and the processing can never keep up with the sending, then our buffer will grow without bound, which is probably not what we want.

I recently implemented a straightforward 'NetworkPipe' that is meant to function as a connection between a single client and server, where the outside user doesn't know or care whether the Pipe is the client or the server. It buffers data in much the way you are asking about. How? The class was threaded; that was about the only way I could figure out to cleanly buffer the data. Here is the basic process I followed (note that I set a maximum size on the Pipes):

  1. Process 1 starts a pipe and defaults to being the server. Its internal thread now waits for a client.
  2. Process 2 starts a pipe; since a server already exists, it defaults to being the client.
  3. We are now connected; the first thing to do is exchange maximum buffer sizes.
  4. Process 1 writes data (it knows that the other end has an empty buffer [see #3]).
  5. Process 2's internal thread (now waiting on a select() for the socket) sees that data has arrived, reads it, and buffers it. Process 2 then sends the new buffered size back to Process 1.

That's a really simplified version, but basically, by threading the class I can always be waiting on a blocking select() call; as soon as data arrives I read and buffer it, then send back the new buffered size. You could do something similar and buffer the data blindly, which is actually quite a bit simpler because you don't have to exchange buffer sizes, but that's probably a bad idea. The approach above lets external users read and write data without blocking their thread (unless the buffer on the other end is full).
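A hedged sketch of that buffer-size exchange, seen from the sending side only (all the names here are invented for the example; the real NetworkPipe is threaded and handles the receiving and size reporting as well):

#include <boost/asio.hpp>
#include <algorithm>
#include <cstddef>

// Sketch: the sender tracks how much free space the peer last reported (steps 3/5)
// and never writes more than that, so the peer's buffer has a hard maximum size.
class PipeSender
{
public:
    PipeSender(boost::asio::ip::tcp::socket& socket, std::size_t peerFreeBytes)
        : m_Socket(socket), m_PeerFreeBytes(peerFreeBytes) {}

    // Returns how many bytes were actually sent (0 if the peer's buffer is full).
    std::size_t Write(const char* data, std::size_t size)
    {
        std::size_t toSend = std::min(size, m_PeerFreeBytes);
        if (toSend == 0)
            return 0;                               // peer is full; caller retries later

        boost::asio::write(m_Socket, boost::asio::buffer(data, toSend));
        m_PeerFreeBytes -= toSend;                  // assume used until the peer reports back
        return toSend;
    }

    // Called when the peer reports its new free buffer size (step 5).
    void OnPeerBufferUpdate(std::size_t freeBytes) { m_PeerFreeBytes = freeBytes; }

private:
    boost::asio::ip::tcp::socket& m_Socket;
    std::size_t m_PeerFreeBytes;
};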

I implemented a solution using the boost::asio::async_write method.

Basically:

  • I have one thread per client (my threads are doing CPU-bound work)
  • As each thread accumulates some amount of data, it writes it to the socket using async_write, not caring whether previous writes have completed
  • The code is careful to manage the lifetime of the socket and the data buffers being written out, because the CPU processing finishes before all the data has been written out

This works well for me. It enables the server thread to finish as soon as it's done its CPU work.

Overall, the time for the client to receive and parse all of its data went down. Similarly, the wall-clock time the server spends on each client went down.

Code snippet:

void SocketStream::Write(const char* data, unsigned int dataLength)
{
    // Make a copy of the data
    // we'll delete it when we get called back via HandleWrite
    char* dataCopy = new char[dataLength];
    memcpy( dataCopy,  data, dataLength );

    boost::asio::async_write
        (
        *m_pSocket,
        boost::asio::buffer(dataCopy, dataLength),
        boost::bind
            (
            &SocketStream::HandleWrite,                     // the address of the method to callback when the write is done
            shared_from_this(),                             // a pointer to this, using shared_from_this to keep us alive
            dataCopy,                                       // first parameter to the HandleWrite method
            boost::asio::placeholders::error,               // placeholder so that async_write can pass us values
            boost::asio::placeholders::bytes_transferred
            )
        );
}

void SocketStream::HandleWrite(const char* data, const boost::system::error_code& error, size_t bytes_transferred)
{
    // Deallocate the buffer now that it's been written out
    // (delete[] to match the new[] in Write)
    delete[] data;

    if ( !error )
    {
        m_BytesWritten += bytes_transferred;
    }
    else
    {
        cout << "SocketStream::HandleWrite received error: " << error.message() << endl;
    }
}
