
Boost.asio client/server TIME_WAIT why?

I am trying to run some functional tests against an API.

My API has a client side and a server side. The client side just connects and sets a flag. The server side just accepts connections.

This is one of my test cases:

BOOST_AUTO_TEST_CASE(client_can_connect_to_server) {
    boost::asio::io_service serverService;
    std::thread serverLoop([&serverService] { serverService.run(); });

    boost::asio::io_service clientService;
    std::thread clientLoop([&clientService] { clientService.run(); });

    //    std::this_thread::sleep_for(10ms); Maybe wait for server loop to start...?

    auto connectionSuccess = connectTo("127.0.0.1", "54321", kAuthData, clientService);

    BOOST_REQUIRE(blockForDurationOrWhile
                  (timeout,
                   [&] { return connectionSuccess.wait_for(0s) != std::future_status::ready; }) == ExitStatus::ConditionSatisfied);

    serverService.stop();
    clientLoop.join();
    serverLoop.join();
}

I am having trouble with two things here:

  1. The connection times out more than half of the time, but sometimes works.
  2. When the test finishes through the successful path, netstat seems to show some kind of socket leak, with the sockets in the TIME_WAIT state. I am shutting down and closing the sockets; I just cannot figure out what is wrong. The following is shown for around 30-45 seconds after the app exits:

    tcp 0 0 ip6-localhost:52256 ip6-localhost:54321 TIME_WAIT
    tcp 0 0 ip6-localhost:54321 ip6-localhost:52256 TIME_WAIT

The client and server code is below:

std::future<bool> connectTo(std::string const & host,
                            std::string const & port,
                            std::string const & authData,
                            boost::asio::io_service & s,
                            std::chrono::high_resolution_clock::duration timeout = kCortexTryConnectTimeout) {
    using namespace boost::asio;
    using boost::asio::ip::tcp;

    std::promise<bool> p;
    auto res = p.get_future();
    spawn
        (s,
         [&s, host, port, p = std::move(p)](yield_context yield) mutable {
            tcp::socket socket(s);
            BOOST_SCOPE_EXIT(&socket) {
                std::cout << "Closing client socket\n";
                if (socket.is_open()) {
                    boost::system::error_code ec{};
                    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
                    socket.close();
                    std::cout << "Client socket closed\n";
                }
            } BOOST_SCOPE_EXIT_END

            std::cout << "Client trying to connect\n";
            tcp::resolver resolver(s);
            boost::system::error_code ec{boost::asio::error::operation_aborted};
            boost::asio::async_connect(socket, resolver.resolve({host, port}), yield[ec]);
            std::cout << "Client Connected\n";
            if (!ec) p.set_value(true);
            else p.set_value(false);
        });
    return res;
}

The server handles connections:

class ConnectionsAcceptorTask {
public:
    //Session handling for Cortex. Will move out of here
    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(boost::asio::ip::tcp::socket socket) : _socket(std::move(socket)) {}
        void start() {}

        ~Session() {
            if (_socket.is_open()) {
                boost::system::error_code ec{};
                _socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
                _socket.close();
            }
        }
    private:
        boost::asio::ip::tcp::socket _socket;
    };

    ConnectionsAcceptorTask(unsigned int port,
                            boost::asio::io_service & s)
        : _port(port),
          _ioService(&s)
    {}

    void operator()() {
        namespace ba = boost::asio;
        using boost::asio::ip::tcp;
        ba::spawn
            (*_ioService,
             [s = _ioService, port = this->_port](ba::yield_context yield) {

                tcp::acceptor acceptor
                    (*s,
                     tcp::endpoint(tcp::v4(), port));
                // Note: the endpoint constructor above has already opened,
                // bound and listened, so setting the option here comes too
                // late to affect the bind (that constructor sets
                // reuse_address before binding by default anyway).
                acceptor.set_option(boost::asio::socket_base::reuse_address(true));

                BOOST_SCOPE_EXIT(&acceptor) {
                    std::cout << "Closing acceptor\n";
                    if (acceptor.is_open()) {
                        acceptor.close();
                        std::cout << "Acceptor closed\n";
                    }
                } BOOST_SCOPE_EXIT_END

                for (;;) {
                    boost::system::error_code ec{};
                    tcp::socket socket(*s);
                    acceptor.async_accept(socket, yield[ec]); 

                    if (!ec) std::make_shared<Session>(std::move(socket))->start();
                }
            });
    }
private:
    unsigned int _port = 0;
    boost::asio::io_service * _ioService;
};

The TIME_WAIT state is not a socket leak. It is a normal part of TCP connection tear-down, specified in RFC 793: the side that closes the connection first keeps the address/port pair quarantined for a while (two maximum segment lifetimes, which matches the 30-45 seconds you observe) so that delayed segments from the old connection cannot be mistaken for a new one. The entries disappear on their own; there is nothing left for your process to close.
