
How to set a timeout on blocking sockets in boost asio?

Is there a way to cancel a pending operation (without disconnecting) or set a timeout for the boost library functions?

I.e., I want to set a timeout on a blocking socket in boost asio:

socket.read_some(boost::asio::buffer(pData, maxSize), error_);

Example: I want to read some data from the socket, but I want to throw an error if 10 seconds have passed.

When this question was asked, I guess ASIO did not have any example of how to accomplish what the OP needed, that is, to time out a blocking operation such as a blocking socket operation. Now there are examples that show you exactly how to do this. The example seems long, but that is because it is WELL commented. It shows how to use the io_service in a 'one shot' kind of mode.

I think the example is a great solution. The other solutions here break portability and don't take advantage of the io_service. If portability is not important and the io_service seems like too much overhead, then you should not be using ASIO. No matter what, you will have an io_service created (almost all ASIO functionality depends on it, even sync sockets), so take advantage of it.

Timeout a blocking asio tcp operation

Timeout a blocking asio udp operation

The ASIO documentation has been updated, so check it out for new examples of how to overcome some of the 'gotchas' ASIO used to have.

TL;DR

socket.set_option(boost::asio::detail::socket_option::integer<SOL_SOCKET, SO_RCVTIMEO>{ 200 });

FULL ANSWER: This question keeps being asked over and over again, for many years. The answers I have seen so far are quite poor. I'll add this info right here, in one of the first occurrences of this question.

Everybody trying to use ASIO to simplify their networking code would be perfectly happy if the author would just add an optional timeout parameter to all sync and async I/O functions. Unfortunately, this is unlikely to happen (in my humble opinion, just for ideological reasons; after all, the AS in ASIO is there for a reason).

So these are the ways to skin this poor cat available so far, none of them especially appetizing. Let's say we need a 200 ms timeout.

1) Good (bad) old socket API:

const int timeout = 200;
::setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO, (const char *)&timeout, sizeof timeout);//SO_SNDTIMEO for send ops

Please note these peculiarities:

  1. const int for the timeout: on Windows the required type is actually DWORD, but the current set of compilers luckily defines it the same way, so const int will work in both the Windows and Posix worlds.
  2. (const char*) for the value: Windows requires const char*, Posix requires const void*; in C++ a const char* converts to const void* silently, while the opposite is not true.

Advantages: works, and probably always will, as the socket API is old and stable. Simple enough. Fast. Disadvantages: technically it might require the appropriate header files (different on Windows and even between UNIX flavors) for setsockopt and the macros, but the current implementation of ASIO pollutes the global namespace with them anyway. Requires a variable for the timeout. Not type-safe. On Windows, requires the socket to be in overlapped mode to work (which the current ASIO implementation luckily uses, but it is still an implementation detail). UGLY!

2) Custom ASIO socket option:

typedef boost::asio::detail::socket_option::integer<SOL_SOCKET, SO_RCVTIMEO> rcv_timeout_option; //somewhere in your headers to be used everywhere you need it
//...
socket.set_option(rcv_timeout_option{ 200 });

Advantages: simple enough. Fast. Beautiful (with the typedef). Disadvantages: depends on an ASIO implementation detail, which might change (but then, everything will change eventually, and such a detail is less likely to change than public APIs subject to standardization). In case it does change, you'll have to either write a class according to https://www.boost.org/doc/libs/1_68_0/doc/html/boost_asio/reference/SettableSocketOption.html (which is of course a major PITA, thanks to the obvious overengineering of this part of ASIO) or, better yet, revert to 1).

3) Use C++ async/future facilities.

#include <future>
#include <chrono>
//...
// note: keep the future in a named variable; the future returned by std::async
// blocks in its destructor until the task finishes, so calling wait_for on a
// temporary would end up waiting for the whole operation and defeat the timeout
auto fut = std::async(std::launch::async, [&] (){ /*your stream ops*/ });
auto status = fut.wait_for(std::chrono::milliseconds{ 200 });
switch (status)
    {
    case std::future_status::deferred:
    //... should never happen with std::launch::async
        break;
    case std::future_status::ready:
    //...
        break;
    case std::future_status::timeout:
    //... the operation is still running; unblock it (e.g. cancel/close the socket)
    //    before fut goes out of scope, or its destructor will block
        break;
    }

Advantages: standard. Disadvantages: always starts a new thread (in practice), which is relatively slow (might be good enough for clients, but will lead to a DoS vulnerability for servers, as threads and sockets are "expensive" resources). Don't try to use std::launch::deferred instead of std::launch::async to avoid the new thread launch: wait_for will always return future_status::deferred without even trying to run the code.

4) The method prescribed by ASIO: use async operations only (which is not really an answer to the question).

Advantages: good enough for servers too, if huge scalability for short transactions is not required. Disadvantages: quite wordy (so I will not even include examples; see the ASIO examples). It requires very careful lifetime management of all objects used by async operations and their completion handlers. In practice this requires every class that holds and uses such data in async operations to derive from enable_shared_from_this, which requires all such classes to be allocated on the heap, which means (at least for short operations) that scalability will start to taper off after about 16 threads, as every heap alloc/dealloc uses a memory barrier.

You could do an async_read and also set a timer for your desired timeout. Then, if the timer fires, call cancel on your socket object. Otherwise, if the read happens, you can cancel the timer. This requires you to use an io_service object, of course.

edit: Found a code snippet for you that does this:

http://lists.boost.org/Archives/boost/2007/04/120339.php
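
Here is a minimal sketch of that idea (it is not the code from the linked post; the function name read_with_timeout is made up, and it assumes the older io_service/deadline_timer API, i.e. Boost versions before the io_context renaming):

#include <boost/asio.hpp>
#include <boost/asio/deadline_timer.hpp>

// Read some data, or throw if nothing arrives within the given time.
template <typename MutableBufferSequence>
std::size_t read_with_timeout(boost::asio::io_service& io,
                              boost::asio::ip::tcp::socket& socket,
                              const MutableBufferSequence& buffers,
                              const boost::posix_time::time_duration& timeout)
{
    boost::asio::deadline_timer timer(io);
    bool timed_out = false;
    boost::system::error_code read_ec = boost::asio::error::would_block;
    std::size_t bytes_read = 0;

    timer.expires_from_now(timeout);
    timer.async_wait([&](const boost::system::error_code& ec) {
        if (!ec)                   // the timer really expired (it was not cancelled)
        {
            timed_out = true;
            socket.cancel();       // the pending read completes with operation_aborted
        }
    });

    socket.async_read_some(buffers,
        [&](const boost::system::error_code& ec, std::size_t bytes) {
            read_ec = ec;
            bytes_read = bytes;
            timer.cancel();        // the read finished first, stop the timer
        });

    io.reset();                    // 'one shot' use of the io_service
    io.run();                      // returns once both handlers have completed

    if (!read_ec)
        return bytes_read;         // data arrived in time
    if (timed_out)
        throw boost::system::system_error(boost::asio::error::timed_out);
    throw boost::system::system_error(read_ec);   // some other error
}

// usage (illustrative):
//   read_with_timeout(io_service, socket,
//                     boost::asio::buffer(pData, maxSize),
//                     boost::posix_time::seconds(10));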

Under Linux/BSD, the timeout on I/O operations on sockets is directly supported by the operating system. The option can be enabled via setsockopt(). I don't know if boost::asio provides a method for setting it, or whether it exposes the underlying socket descriptor so you can set it directly; the latter case is not really portable.

For the sake of completeness, here's the description from the man page:

SO_RCVTIMEO and SO_SNDTIMEO

 Specify the receiving or sending timeouts until reporting an error. The argument is a struct timeval. If an input or output function blocks for this period of time, and data has been sent or received, the return value of that function will be the amount of data transferred; if no data has been transferred and the timeout has been reached then -1 is returned with errno set to EAGAIN or EWOULDBLOCK just as if the socket was specified to be non-blocking. If the timeout is set to zero (the default) then the operation will never timeout. Timeouts only have effect for system calls that perform socket I/O (eg, read(2), recvmsg(2), send(2), sendmsg(2)); timeouts have no effect for select(2), poll(2), epoll_wait(2), etc.
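
For illustration, a small sketch of setting that option directly on the socket's native descriptor under Posix (the helper name set_recv_timeout is made up here, and error handling is omitted):

#include <sys/socket.h>
#include <sys/time.h>

// Enable the OS-level receive timeout described above (Posix form: struct timeval).
void set_recv_timeout(int native_fd, long seconds, long microseconds)
{
    timeval tv;
    tv.tv_sec  = seconds;
    tv.tv_usec = microseconds;
    // SO_SNDTIMEO limits send operations in the same way.
    ::setsockopt(native_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
}

// usage with an asio socket (illustrative):
//   set_recv_timeout(socket.native_handle(), 10, 0);   // 10 second receive timeout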

I had the same question, and after some research the simplest, cleanest solution I could come up with was to get the underlying native socket and do a select until there was data to read. select takes a timeout parameter. Of course, working with the native socket starts to go against the point of using asio in the first place, but again, this seems to be the cleanest way. As far as I could tell, asio doesn't provide an easy way to do this for synchronous usage. Code:

        // socket here is:  boost::shared_ptr<boost::asio::ip::tcp::socket> a_socket_ptr

        // Set up a timed select call, so we can handle timeout cases.

        fd_set fileDescriptorSet;
        struct timeval timeStruct;

        // set the timeout to 30 seconds
        timeStruct.tv_sec = 30;
        timeStruct.tv_usec = 0;
        FD_ZERO(&fileDescriptorSet);

        // We'll need to get the underlying native socket for this select call, in order
        // to add a simple timeout on the read:

        int nativeSocket = a_socket_ptr->native();
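        // note: in current Boost versions this accessor is spelled native_handle()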

        FD_SET(nativeSocket,&fileDescriptorSet);

        select(nativeSocket+1,&fileDescriptorSet,NULL,NULL,&timeStruct);

        if(!FD_ISSET(nativeSocket,&fileDescriptorSet)){ // timeout

                std::string sMsg("TIMEOUT on read client data. Client IP: ");

                sMsg.append(a_socket_ptr->remote_endpoint().address().to_string());

                throw MyException(sMsg);
        }

        // now we know there's something to read, so read
        boost::system::error_code error;
        size_t iBytesRead = a_socket_ptr->read_some(boost::asio::buffer(myVector), error);

        ...

Perhaps this will be useful for your situation.

Following on from what grepsedawk mentioned: there are a few examples showing how to cancel long-running asynchronous operations after a period of time, under the Timeouts section of the asio docs: Boost Asio Examples. The Async TCP client example helped me the most.

Happy Asyncing :)

Even years after the original question was asked, there is still no satisfying answer.

Manually using select is not a good option:

  1. The file descriptor number must be less than 1024.
  2. An FD may be spuriously reported as ready (e.g. due to a wrong checksum).

Calling io_service.run_one() is also a bad idea, because there may be other async operations that need the io_service to always run(). And boost's documentation about the blocking tcp client is hard to comprehend.

So here is my solution. The key idea is the following:

{
    Semaphore r_sem;
    boost::system::error_code r_ec;
    boost::asio::async_read(s,buffer,
                            [this, &r_ec, &r_sem](const boost::system::error_code& ec_, size_t) {
                                r_ec=ec_;
                                r_sem.notify();
                            });
    if(!r_sem.wait_for(std::chrono::seconds(3))) // wait for 3 seconds
    {
        s.cancel();
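        // wait for the aborted handler to actually run, so the locals it captures by reference stay valid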
        r_sem.wait();
        throw boost::system::system_error(boost::asio::error::try_again);
    }
    else if(r_ec)
        throw boost::system::system_error(r_ec);
}

Here Semaphore is just a mutex and a condition_variable.
wait_for is implemented using http://en.cppreference.com/w/cpp/thread/condition_variable/wait_for
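
For reference, a minimal sketch of such a Semaphore (an assumed implementation built only from the pieces mentioned above; the real one is in the linked repo):

#include <chrono>
#include <condition_variable>
#include <mutex>

// Minimal binary semaphore built from a mutex and a condition_variable,
// matching the interface used above (notify / wait / wait_for).
class Semaphore
{
public:
    void notify()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        signaled_ = true;
        cv_.notify_one();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return signaled_; });
        signaled_ = false;
    }

    template <class Rep, class Period>
    bool wait_for(const std::chrono::duration<Rep, Period>& timeout)
    {
        std::unique_lock<std::mutex> lock(mutex_);
        bool ok = cv_.wait_for(lock, timeout, [this] { return signaled_; });
        if (ok)
            signaled_ = false;
        return ok;   // false means the timeout expired
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    bool signaled_ = false;
};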

Full code is at https://github.com/scinart/cpplib/blob/master/include/asio.hpp
Example: https://github.com/scinart/cpplib/blob/6e9a1690bf68971b809be34dfe432949d9a9f727/standalone_example/boost_block_tcp_client_server.cpp

-- update -- Example link updated.

SO_RCVTIMEO and SO_SNDTIMEO take a timeval struct from "sys/time.h" instead of an int. So @Pavel Verevkin's option 1) would need to take a timeval instead of an int, and option 2) would require implementing a class, since boost::asio::detail::socket_option::integer only stores a single integer value.
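
A rough sketch of what such a class could look like, following the SettableSocketOption requirements linked above (the class name rcv_timeout_timeval is made up; Posix only):

#include <boost/asio.hpp>
#include <sys/socket.h>
#include <sys/time.h>

// Socket option carrying a struct timeval for SO_RCVTIMEO (Posix form).
class rcv_timeout_timeval
{
public:
    rcv_timeout_timeval(long seconds, long microseconds)
    {
        tv_.tv_sec  = seconds;
        tv_.tv_usec = microseconds;
    }

    // The members required of a SettableSocketOption:
    template <typename Protocol> int level(const Protocol&) const        { return SOL_SOCKET; }
    template <typename Protocol> int name(const Protocol&) const         { return SO_RCVTIMEO; }
    template <typename Protocol> const void* data(const Protocol&) const { return &tv_; }
    template <typename Protocol> std::size_t size(const Protocol&) const { return sizeof tv_; }

private:
    timeval tv_;
};

// usage (illustrative):
//   socket.set_option(rcv_timeout_timeval(0, 200000));   // 200 ms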

Caution: using SO_RCVTIMEO may not always help with timeouts on blocking calls. I ran into a problem on *nix systems with an infinitely blocking call in the pool (for a more detailed explanation, see SO_RCVTIME and SO_RCVTIMEO not affecting Boost.Asio operations) while everything worked on Windows. Using the non_blocking method and handling the corresponding error::would_block (WSAEWOULDBLOCK) and error::try_again (EAGAIN) errors helped me.
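
A sketch of that non_blocking idea (a crude polling loop; the name read_some_with_deadline and the 1 ms poll interval are made up, and it trades CPU efficiency for portability):

#include <boost/asio.hpp>
#include <chrono>
#include <thread>

// Poll a non-blocking read_some until data arrives or the deadline passes.
template <typename MutableBufferSequence>
std::size_t read_some_with_deadline(boost::asio::ip::tcp::socket& socket,
                                    const MutableBufferSequence& buffers,
                                    std::chrono::milliseconds timeout)
{
    socket.non_blocking(true);
    const auto deadline = std::chrono::steady_clock::now() + timeout;

    for (;;)
    {
        boost::system::error_code ec;
        const std::size_t n = socket.read_some(buffers, ec);
        if (!ec)
            return n;                                          // got data
        if (ec != boost::asio::error::would_block &&
            ec != boost::asio::error::try_again)
            throw boost::system::system_error(ec);             // a real error
        if (std::chrono::steady_clock::now() >= deadline)
            throw boost::system::system_error(boost::asio::error::timed_out);
        std::this_thread::sleep_for(std::chrono::milliseconds(1));  // crude back-off
    }
}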

You can wrap the synchronous calls into futures and wait for them to complete with a timeout (wait_timeout).

http://www.boost.org/doc/libs/1_47_0/doc/html/thread/synchronization.html#thread.synchronization.futures

Certainly not one-size-fits-all, but it works well for, e.g., circumventing slow connect timeouts.

On *nix you would use alarm(), so that your socket calls fail with EINTR.
