
Handling multiple clients with async_accept

I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:

 try {
    boost::asio::io_service io_service;

    boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
      auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
      ctx.set_options(
        boost::asio::ssl::context::default_workarounds
        | boost::asio::ssl::context::no_sslv2
        | boost::asio::ssl::context::single_dh_use);
      ctx.use_private_key_file(..); // My data setup
      ctx.use_certificate_chain_file(...); // My data setup

      boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));

      for (;;) {

        boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
        acceptor.async_accept(sock.next_layer(), yield);

        sock.async_handshake(boost::asio::ssl::stream_base::server, yield);

        auto ec = boost::system::error_code{};
        char data_[1024];
        auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);

        if (ec == boost::asio::error::eof)
          return; //connection closed cleanly by peer
        else if (ec)
          throw boost::system::system_error(ec); //some other error, is this desirable?

        sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);

        if (ec == boost::asio::error::eof)
          return; //connection closed cleanly by peer
        else if (ec)
          throw boost::system::system_error(ec); //some other error

        // Shutdown gracefully
        sock.async_shutdown(yield[ec]);
        if (ec && (ec.category() == boost::asio::error::get_ssl_category())
          && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
        {
          sock.lowest_layer().close();
        }
      }

    });

    io_service.run();
  }
  catch (std::exception& e)
  {
    std::cerr << "Exception: " << e.what() << "\n";
  }

Anyway, I'm not sure if the code above will do what I want: in theory, calling async_accept should return control to the io_service event loop.

Will another connection be accepted if one has already been accepted, i.e. execution is already past the async_accept line?

It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what that block is part of).

Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it to adapt it to your needs.

If you look at main, it has the following chunk:

boost::asio::spawn(io_service,
    [&](boost::asio::yield_context yield)
    {
      tcp::acceptor acceptor(io_service,
        tcp::endpoint(tcp::v4(), std::atoi(argv[1])));

      for (;;)
      {
        boost::system::error_code ec;
        tcp::socket socket(io_service);
        acceptor.async_accept(socket, yield[ec]);
        if (!ec) std::make_shared<session>(std::move(socket))->go();
      }
    });

This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).

Again, I'm not sure about your code, but it contains exits from the loop like

return; //connection closed cleanly by peer
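
Once each connection runs in its own coroutine, such a return (or a thrown exception) ends only that one session, and the acceptor coroutine keeps looping. A rough, untested sketch of that restructuring for your SSL case (assuming ctx and port are set up as in your snippet and outlive the sessions) might look like:

boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
  boost::asio::ip::tcp::acceptor acceptor(io_service,
    boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));

  for (;;) {
    // Accept into a shared_ptr so the stream outlives this loop iteration.
    auto sock = std::make_shared<
      boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>(io_service, ctx);

    boost::system::error_code ec;
    acceptor.async_accept(sock->next_layer(), yield[ec]);
    if (ec) continue;

    // One coroutine per client: handshake, echo loop and shutdown all live
    // here, while the acceptor above is already back in async_accept.
    boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
      boost::system::error_code ec;
      sock->async_handshake(boost::asio::ssl::stream_base::server, yield[ec]);
      if (ec) return; // ends this session only, not the acceptor

      char data[1024];
      for (;;) {
        auto n = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
        if (ec) break; // eof or error: stop echoing for this client
        boost::asio::async_write(*sock, boost::asio::buffer(data, n), yield[ec]);
        if (ec) break;
      }

      sock->async_shutdown(yield[ec]); // best-effort SSL shutdown
    });
  }
});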

To illustrate the point, here are two applications.

The first is a Python multiprocessing echo client, adapted from PMOTW:

import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'

    for _ in range(300):
        try:

            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)

            # Look for the response
            amount_received = 0
            amount_expected = len(message)

            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data

        except:
            print >>sys.stderr, 'closing socket'
            sock.close()
            break  # stop this session once the connection fails

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))

The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.

When run, it outputs

...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...

showing that the echo sessions are indeed interleaved.

Now for the echo server. I've slightly adapted the example from the docs:

#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class session :
    public std::enable_shared_from_this<session> {

public:
    session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self(
            shared_from_this());
        socket_.async_read_some(
            boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                 if(!ec)
                     do_write(length);
            });
    }

    void do_write(std::size_t length) {
        auto self(shared_from_this());
        // Use the composed async_write (rather than async_write_some) so the
        // whole buffer is written before the completion handler runs.
        boost::asio::async_write(
            socket_,
            boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/) {
                if (!ec)
                    do_read();
            });
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    server(boost::asio::io_service& io_service, short port) :
            acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
            socket_(io_service) {
        do_accept();
    }

private:
    void do_accept() {
        acceptor_.async_accept(
            socket_,
            [this](boost::system::error_code ec) {
                if(!ec)
                    std::make_shared<session>(std::move(socket_))->start();

                do_accept();
            });
    }

    tcp::acceptor acceptor_;
    tcp::socket socket_;
};

int main(int argc, char* argv[]) {
    const int port = 5000;
    try {
        boost::asio::io_service io_service;

        server s{io_service, port};

        io_service.run();
    }
    catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}

Running the Python client above against this server produces the interleaved output shown earlier, so the sessions are indeed served concurrently.

Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).

However, this is not a fundamental difference with respect to your question. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns control to asio's event loop, which monitors all the current operations that can proceed.
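
For instance, the do_read()/do_write() ping-pong above collapses, in coroutine style, into a single loop along these lines (a sketch only; it assumes an io_service in scope and a socket held as a std::shared_ptr<tcp::socket>, and relies on the fact that a plain yield throws boost::system::system_error on failure instead of filling in an error code):

// Coroutine-style counterpart of the callback chain above: the same two
// operations, written sequentially; each yield suspends this coroutine and
// hands control back to io_service.run() until the operation completes.
boost::asio::spawn(io_service, [socket](boost::asio::yield_context yield) {
  try {
    char data[1024];
    for (;;) {
      std::size_t n = socket->async_read_some(boost::asio::buffer(data), yield);
      boost::asio::async_write(*socket, boost::asio::buffer(data, n), yield);
    }
  }
  catch (std::exception&) {
    // eof or any other error throws and ends just this session
  }
});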

From the asio coroutine docs:

Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don't split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.

It's not that the sequential structure makes all operations sequential; that would eradicate the entire need for asio.
