
Boost.Asio async_send question

I'm using Boost.Asio for a server application that I'm writing.

async_send requires the caller to keep ownership of the data being sent until the send completes. That means code like the following will fail (and it does), because data is destroyed before the write finishes.

void func()
{
    std::vector<unsigned char> data;

    // ...
    // fill data with stuff
    // ...

    socket.async_send(boost::asio::buffer(data), handler);
}

So my solution was to do something like this:

std::vector<unsigned char> data;

void func()
{        
    // ...
    // fill data with stuff
    // ...

    socket.async_send(boost::asio::buffer(data), handler);
}

But now I'm wondering if I have multiple clients, will I need to create a separate vector for each connection?

Or can I use that one single vector? If I'm able to use that single vector, if I overwrite the contents inside it will that mess up the data I'm sending to all my clients?

A possible fix is to use a shared_ptr to hold your local vector and change the handler's signature to receive that shared_ptr, prolonging the life of the data until the send is complete (thanks to Tim for pointing that out to me):

void handler( boost::shared_ptr<std::vector<char> > data )
{
}

void func()
{
    boost::shared_ptr<std::vector<char> > data(new std::vector<char>);
    // ...
    // fill data with stuff
    // ...

    socket.async_send(boost::asio::buffer(*data), boost::bind(handler, data));
}
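
For completeness: async_send invokes its completion handler with an error code and a byte count, and a boost::bind functor ignores extra arguments it does not use, which is why the simplified handler above works. A variant that forwards them explicitly might look like this (just a sketch; it assumes the same socket and Boost headers as the snippets above):

void handler(const boost::system::error_code& error,
             std::size_t bytes_transferred,
             boost::shared_ptr<std::vector<char> > data)
{
    // data is still alive here; it is released once the last copy of the
    // bound functor (and therefore the shared_ptr) goes away
}

void func()
{
    boost::shared_ptr<std::vector<char> > data(new std::vector<char>);
    // ...
    // fill data with stuff
    // ...

    socket.async_send(
        boost::asio::buffer(*data),
        boost::bind(handler,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred,
                    data));
}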

I solved a similar problem by passing a shared_ptr to my data to the handler function. Since asio holds on to the handler functor until it's called, and the handler functor keeps the shared_ptr reference, the data stays allocated as long as there's an open request on it.

edit - here's some code:

Here the connection object owns the data buffer being written, so the shared_ptr is to the connection object itself; the bind call attaches the member-function functor to that reference, and the pending asio operation keeps the object alive.

The key is that each handler must start a new async operation with another reference, or the connection will be closed. Once the connection is done, or an error occurs, we simply stop generating new read/write requests. One caveat is that you need to make sure you check the error object in all your callbacks.

boost::asio::async_write(
    mSocket,
    buffers,
    mHandlerStrand.wrap(
        boost::bind(
            &TCPConnection::InternalHandleAsyncWrite,
            shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));

void TCPConnection::InternalHandleAsyncWrite(
    const boost::system::error_code& e,
    std::size_t bytes_transferred)
{
    // check the error, then start the next async operation here to keep
    // the connection alive (or issue nothing and let it close)
}

But now I'm wondering if I have multiple clients, will I need to create a separate vector for each connection?

Yes, though each vector does not need to be in global scope. The typical solution is to keep the buffer as a member of an object and bind a member function of that object as the async_write completion handler. That way the buffer stays in scope for the entire lifetime of the asynchronous write. The asio examples are littered with this usage of binding member functions with this and shared_from_this. In general it is preferable to use shared_from_this to simplify object lifetime, especially in the face of io_service::stop() and ~io_service(), though for simple examples this scaffolding is often unnecessary.
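
A minimal sketch of that member-buffer pattern, assuming a hypothetical session class managed by a shared_ptr (the buffer is a member so it outlives the call, and shared_from_this keeps the object alive until the handler runs):

class session : public boost::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_service& io_service)
        : socket_(io_service)
    {
    }

    void send(const std::vector<unsigned char>& payload)
    {
        // member buffer, stays valid for the duration of the write;
        // note: only one write may be in flight at a time with a single buffer
        buffer_ = payload;

        boost::asio::async_write(
            socket_,
            boost::asio::buffer(buffer_),
            boost::bind(&session::handle_write,
                        shared_from_this(), // keeps the session alive
                        boost::asio::placeholders::error));
    }

private:
    void handle_write(const boost::system::error_code& error)
    {
        // always check the error before starting further operations
    }

    boost::asio::ip::tcp::socket socket_;
    std::vector<unsigned char> buffer_;
};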

The destruction sequence described above permits programs to simplify their resource management by using shared_ptr<>. Where an object's lifetime is tied to the lifetime of a connection (or some other sequence of asynchronous operations), a shared_ptr to the object would be bound into the handlers for all asynchronous operations associated with it.

A good place to start is the async echo server due to its simplicity.

boost::asio::async_write(
    socket,
    boost::asio::buffer(data, bytes_transferred),
    boost::bind(
        &session::handle_write,
        this,
        boost::asio::placeholders::error
    )
);

The way that I've been doing it is to really take the "TCP is a stream" concept to heart. So I have a boost::asio::streambuf for each connection to represent what I send to the client.

Like most of the examples in boost, I have a tcp_connection class with an object per connection. Each one has a member boost::asio::streambuf response_; and when I want to send something to the client I just do this:

std::ostream response_stream(&response_);
response_stream << "whatever my response message happens to be!\r\n";

boost::asio::async_write(
    socket_,
    response_,
    boost::bind(
        &tcp_connection::handle_write,
        shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
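
The completion handler isn't shown in that snippet; a minimal sketch of what it might look like (assuming the tcp_connection class above) is:

void tcp_connection::handle_write(const boost::system::error_code& error,
                                  std::size_t /*bytes_transferred*/)
{
    if (error)
    {
        // stop issuing new operations; the connection is released once the
        // last shared_from_this reference held by a pending handler goes away
        return;
    }

    // async_write with a streambuf consumes the bytes it sent from response_,
    // so the next message can simply be streamed into response_ and written
}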

You can't use a single vector unless you send the same, constant data to all the clients (like a prompt message). This is caused by the nature of async I/O. While a send is in progress, the system keeps a pointer to your buffer in its queue along with an AIO packet struct. As soon as it is done with previously queued send operations and there is free space in its own buffer, the system starts forming packets from your data and copies chunks of your buffer into the corresponding places in TCP frames. So if you modify the contents of your buffer along the way, you will corrupt the data sent to the client. When receiving, the system may optimize even further and hand your buffer to the NIC as the target of a DMA operation. In that case a significant number of CPU cycles can be saved on data copying because it is done by the DMA controller. This optimization will probably only work if the NIC supports hardware TCP offload, though.

UPDATE: On Windows, Boost.Asio uses overlapped WSA I/O with completion notifications via IOCP.

Krit explained the data corruption, so I'll give you an implementation suggestion instead.

I would suggest that you use a separate vector for each send operation that is currently being executed. You probably don't want one for each connection since you might want to send several messages on the same connection sequentially without waiting for completion of the previous ones.

You will need one write buffer per connection. Others have suggested a vector per connection, as in your original idea, but for simplicity I would recommend using a vector of strings with your new approach.

Boost.ASIO has some special cases built around using strings with its buffers for writes, which make them easier to work with.
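
For example, a per-connection queue of strings might look like the following sketch (the names are illustrative, not from the question; writes are issued one at a time so each string stays alive until its async_write completes):

class writer
{
public:
    explicit writer(boost::asio::ip::tcp::socket& socket) : socket_(socket) {}

    void send(const std::string& message)
    {
        bool idle = queue_.empty();
        queue_.push_back(message);  // the queue owns a copy of the message
        if (idle)
            write_next();           // no write in flight, start one
    }

private:
    void write_next()
    {
        boost::asio::async_write(
            socket_,
            boost::asio::buffer(queue_.front()),  // front stays valid until the handler runs
            boost::bind(&writer::handle_write, this,
                        boost::asio::placeholders::error));
    }

    void handle_write(const boost::system::error_code& error)
    {
        if (error)
            return;                 // stop on error
        queue_.pop_front();         // release the buffer that was just sent
        if (!queue_.empty())
            write_next();           // start the next queued message
    }

    boost::asio::ip::tcp::socket& socket_;
    std::deque<std::string> queue_;
};

This sketch assumes everything runs on a single io_service thread (or through a strand) and that the writer outlives its pending operations; otherwise the queue needs synchronization and shared_from_this-style lifetime management, as in the other answers.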

Just a thought.
