

Why doesn't boost::asio::ip::udp::socket::receive_from throw an interruption exception on Windows?

volatile std::sig_atomic_t running = true;

int main()
{
  boost::asio::thread_pool tpool;
  boost::asio::signal_set signals(tpool, SIGINT, SIGTERM);
  signals.async_wait([](auto && err, int) { if (!err) running = false; });

  while(running)
  {
    std::array<std::uint8_t, 1024> data;
    socket.receive_from(boost::asio::buffer(data), ....); // (1)
    // calc(data);
  }
  return 0;
}

If my code is blocked at line (1) on Linux and I raise the signal, for example with htop, then line (1) throws an exception about the interruption, but on Windows it doesn't. The problem is that I don't know how to exit the application.

What do I need to do so that my program works the same way on both OSs? Thanks.

Using Windows 10 (MSVC 17), Debian 11 (GCC 9), Boost 1.78.

Regardless of how you "raise the signal" on Windows, the basic problem is that you're relying on OS specifics to cancel a synchronous operation.

Cancellation is an ASIO feature, but only for asynchronous operations. So, consider:

signals.async_wait([&socket](auto&& err, int) {
    if (!err) {
        socket.cancel();
    }
});

Simplifying without a thread_pool gives, e.g.:

Live On Coliru

#define BOOST_ASIO_ENABLE_HANDLER_TRACKING 1
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::udp;
using boost::system::error_code;

struct Program {
    Program(asio::any_io_executor executor)
        : signals_{executor, SIGINT, SIGTERM}
        , socket_{executor} //
    {
        signals_.async_wait([this](error_code ec, int) {
            if (!ec) {
                socket_.cancel();
            }
        });

        socket_.open(udp::v4());
        socket_.bind({{}, 4444});
        receive_loop();
    }

  private:
    asio::signal_set signals_;
    udp::socket      socket_;

    std::array<std::uint8_t, 1024> data_;
    udp::endpoint                  ep_;

    void receive_loop() {
        socket_.async_receive_from( //
            asio::buffer(data_), ep_, [this](error_code ec, size_t) {
                if (!ec)
                    receive_loop();
            });
    }
};

int main() {
    asio::io_context ioc;
    Program app(ioc.get_executor());

    using namespace std::chrono_literals;
    ioc.run_for(10s); // for COLIRU
}

Prints (on coliru):

@asio|1663593973.457548|0*1|signal_set@0x7ffe0b639998.async_wait
@asio|1663593973.457687|0*2|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593973.457700|.2|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593974.467205|.2|non_blocking_recvfrom,ec=system:0,bytes_transferred=13
@asio|1663593974.467252|>2|ec=system:0,bytes_transferred=13
@asio|1663593974.467265|2*3|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593974.467279|.3|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593974.467291|<2|
@asio|1663593975.481800|.3|non_blocking_recvfrom,ec=system:0,bytes_transferred=13
@asio|1663593975.481842|>3|ec=system:0,bytes_transferred=13
@asio|1663593975.481854|3*4|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593975.481868|.4|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593975.481878|<3|
@asio|1663593976.494097|.4|non_blocking_recvfrom,ec=system:0,bytes_transferred=13
@asio|1663593976.494138|>4|ec=system:0,bytes_transferred=13
@asio|1663593976.494150|4*5|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593976.494164|.5|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593976.494176|<4|
@asio|1663593976.495085|>1|ec=system:0,signal_number=2
@asio|1663593976.495119|1|socket@0x7ffe0b6399f0.cancel
@asio|1663593976.495129|<1|
@asio|1663593976.495151|>5|ec=system:125,bytes_transferred=0
@asio|1663593976.495162|<5|
@asio|1663593976.495184|0|socket@0x7ffe0b6399f0.close
@asio|1663593976.495244|0|signal_set@0x7ffe0b639998.cancel

So that's 3 successful receives, followed by signal 2 (INT) and a cancellation, which completes the pending receive with ec=125 ( asio::error::operation_aborted ) and shuts down.


Multi-threading

There's likely no gain from using multiple threads here, but if you do, use a strand to synchronize access to the IO objects:

asio::thread_pool ioc;
Program app(make_strand(ioc));
