boost::asio async server design

Currently I'm using a design where the server reads the first 4 bytes of the stream, then reads N more bytes after decoding the header.

But I found that the time between the first async_read and the second read is 3-4 ms. I measured it simply by printing timestamps to the console from the callbacks. I sent 10 bytes of data in total. Why does it take so much time to read?

I'm running it in debug mode, but I think one connection in a debug build is not enough load to explain a 3 ms delay between reads from the socket. Maybe I need another approach to cut the TCP stream into "packets"?

UPDATE: I'm posting some code here:

// Completion handler for the 4-byte header read: decode the header,
// then start an async_read for the body of result.size bytes.
void parseHeader(const boost::system::error_code& error)
{
    cout << "[parseHeader] " << lib::GET_SERVER_TIME() << endl;
    if (error) {
        close();
        return;
    }
    GenTCPmsg::header result = msg.parseHeader();
    if (result.error == GenTCPmsg::parse_error::__NO_ERROR__) {
        msg.setDataLength(result.size);
        boost::asio::async_read(*socket,
            boost::asio::buffer(msg.data(), result.size),
            (*_strand).wrap(
                boost::bind(&ConnectionInterface::parsePacket, shared_from_this(),
                            boost::asio::placeholders::error)));
    } else {
        close();
    }
}

// Completion handler for the body read: dispatch the packet,
// then start an async_read for the next 4-byte header.
void parsePacket(const boost::system::error_code& error)
{
    cout << "[parsePacket] " << lib::GET_SERVER_TIME() << endl;
    if (error) {
        close();
        return;
    }
    protocol->parsePacket(msg);
    msg.flush();
    boost::asio::async_read(*socket,
        boost::asio::buffer(msg.data(), config::HEADER_SIZE),
        (*_strand).wrap(
            boost::bind(&ConnectionInterface::parseHeader, shared_from_this(),
                        boost::asio::placeholders::error)));
}

As you can see, the Unix timestamps differ by 3-4 ms. I want to understand why so much time elapses between parseHeader and parsePacket. This is not a client problem; the total data is 10 bytes, but I can't send much more, and the delay is exactly between the calls. I'm using Flash client version 11. All I do is send a ByteArray through the opened socket. I'm not sure whether the delay is on the client. I send all 10 bytes at once. How can I debug where the actual delay is?

There are far too many unknowns to identify the root cause of the delay from the posted code. Nevertheless, there are a few approaches and considerations that can be taken to help identify the problem:

  • Enable handler tracking for Boost.Asio 1.47+. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, to the standard error stream. These timestamps can be used to help filter out delays introduced by application code (parseHeader(), parsePacket(), etc.). A minimal snippet follows this list.
  • Verify that byte-ordering is being handled properly. For example, if the protocol defines the header's size field as two bytes in network-byte-order and the server is handling the field as a raw short, then upon receiving a message that has a body size of 10 (a conversion sketch follows this list):
    • A big-endian machine will call async_read reading 10 bytes. The read operation should complete quickly as the socket already has the 10-byte body available for reading.
    • A little-endian machine will call async_read reading 2560 bytes. The read operation will likely remain outstanding, as far more bytes are trying to be read than is intended.
  • Use tracing tools such as strace, ltrace, etc.
  • Modify Boost.Asio, adding timestamps throughout the callstack. Boost.Asio is shipped as a header-file-only library, so users may modify it to provide as much verbosity as desired. While not the cleanest or easiest of approaches, adding print statements with timestamps throughout the callstack may help provide visibility into timing.
  • Try duplicating the behavior in a short, simple, self-contained example. Start with the simplest of examples to determine if the delay is systemic. Then, iteratively expand upon the example so that it becomes closer to the real code with each iteration.
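
For the first bullet, enabling handler tracking only requires that the macro be visible before any Asio header is included (or be passed on the compiler command line):

// Either compile with -DBOOST_ASIO_ENABLE_HANDLER_TRACKING, or define the
// macro before including any Boost.Asio header. Asio then logs handler
// creation, invocation, and completion, with timestamps, to stderr.
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <boost/asio.hpp>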
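
For the byte-ordering bullet, here is a minimal sketch. The header layout is hypothetical (the question never shows the real one); the point is only that the raw bytes must be converted from network byte order before the value is passed to async_read:

#include <cstring>            // std::memcpy
#include <arpa/inet.h>        // ntohs
#include <boost/cstdint.hpp>  // boost::uint16_t

// Hypothetical layout for illustration: the first two bytes of the 4-byte
// header hold the body size in network byte order. memcpy avoids unaligned
// access; ntohs yields 10 on both big- and little-endian hosts, whereas
// reinterpreting the raw bytes as a short yields 2560 on little-endian.
boost::uint16_t parse_body_size(const unsigned char* header)
{
    boost::uint16_t raw;
    std::memcpy(&raw, header, sizeof raw);
    return ntohs(raw);
}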

Here is a simple example from which I started:

#include <iostream>

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

class tcp_server
  : public boost::enable_shared_from_this< tcp_server >
{
private:

  enum 
  {
     header_size = 4,
     data_size   = 10,
     buffer_size = 1024,
     max_stamp   = 50
  };

  typedef boost::asio::ip::tcp tcp;

public:

  typedef boost::array< boost::posix_time::ptime, max_stamp > time_stamps;

public:

  tcp_server( boost::asio::io_service& service,
              unsigned short port )
    : strand_( service ),
      acceptor_( service, tcp::endpoint( tcp::v4(), port ) ),
      socket_( service ),
      index_( 0 )
  {}

  /// @brief Returns collection of timestamps.
  time_stamps& stamps()
  {
    return stamps_;
  }

  /// @brief Start the server.
  void start()
  {
    acceptor_.async_accept( 
      socket_,
      boost::bind( &tcp_server::handle_accept, this,
                   boost::asio::placeholders::error ) );
  }

private:

  /// @brief Accept connection.
  void handle_accept( const boost::system::error_code& error ) 
  {
    if ( error )
    {  
      std::cout << error.message() << std::endl;
      return;
    }

    read_header();
  }

  /// @brief Read header.
  void read_header()
  {
    boost::asio::async_read(
      socket_,
      boost::asio::buffer( buffer_, header_size ),
      boost::bind( &tcp_server::handle_read_header, this,
                   boost::asio::placeholders::error,
                   boost::asio::placeholders::bytes_transferred ) );
  }

  /// @brief Handle reading header.
  void
  handle_read_header( const boost::system::error_code& error,
                      std::size_t bytes_transferred )
  {
    if ( error )
    {  
      std::cout << error.message() << std::endl;
      return;
    }

    // If no more stamps can be recorded, then stop the async-chain so
    // that io_service::run can return.
    if ( !record_stamp() ) return;

    // Read data.
    boost::asio::async_read(
      socket_,
      boost::asio::buffer( buffer_, data_size ),
      boost::bind( &tcp_server::handle_read_data, this,
                   boost::asio::placeholders::error,
                   boost::asio::placeholders::bytes_transferred ) );

  }

  /// @brief Handle reading data.
  void handle_read_data( const boost::system::error_code& error,
                         std::size_t bytes_transferred )
  {
    if ( error )
    {  
      std::cout << error.message() << std::endl;
      return;
    }

    // If no more stamps can be recorded, then stop the async-chain so
    // that io_service::run can return.
    if ( !record_stamp() ) return;

    // Start reading header again.
    read_header();
  }

  /// @brief Record time stamp.
  bool record_stamp()
  {
    stamps_[ index_++ ] = boost::posix_time::microsec_clock::local_time();

    return index_ < max_stamp;
  }

private:
  boost::asio::io_service::strand strand_;
  tcp::acceptor acceptor_;
  tcp::socket socket_;
  boost::array< char, buffer_size > buffer_;
  time_stamps stamps_;
  unsigned int index_;
};


int main()
{
  boost::asio::io_service service;

  // Create and start the server.
  boost::shared_ptr< tcp_server > server =
    boost::make_shared< tcp_server >( boost::ref(service ), 33333 );  
  server->start();

  // Run.  This will exit once enough time stamps have been sampled.
  service.run();

  // Iterate through the stamps.
  tcp_server::time_stamps& stamps = server->stamps();
  typedef tcp_server::time_stamps::iterator stamp_iterator;
  using boost::posix_time::time_duration;
  for ( stamp_iterator iterator = stamps.begin() + 1,
                       end      = stamps.end();
        iterator != end;
        ++iterator )
  {
     // Obtain the delta between the current stamp and the previous.
     time_duration delta = *iterator - *(iterator - 1);
     std::cout << "Delta: " << delta.total_milliseconds() << " ms"
               << std::endl;
  }
  // Calculate the total delta.
  time_duration delta = *stamps.rbegin() - *stamps.begin();
  std::cout <<    "Total" 
            << "\n  Start: " << *stamps.begin()
            << "\n  End:   " << *stamps.rbegin()
            << "\n  Delta: " << delta.total_milliseconds() << " ms"
            << std::endl;
}

A few notes about the implementation:

  • There is only one thread (main) and one asynchronous chain read_header->handle_read_header->handle_read_data. This should minimize the amount of time a ready-to-run handler spends waiting for an available thread.
  • To focus on boost::asio::async_read, noise is minimized by:
    • Using a pre-allocated buffer.
    • Not using shared_from_this() or strand::wrap.
    • Recording the timestamps, and performing processing post-collection.

I compiled on CentOS 5.4 using gcc 4.4.0 and Boost 1.50. To drive the data, I opted to send 1000 bytes using netcat:

$ ./a.out > output &
[1] 18623
$ echo "$(for i in {0..1000}; do echo -n "0"; done)" | nc 127.0.0.1 33333
[1]+  Done                    ./a.out >output
$ tail output
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Total
  Start: 2012-Sep-10 21:22:45.585780
  End:   2012-Sep-10 21:22:45.586716
  Delta: 0 ms

Observing no delay, I expanded upon the example by modifying the boost::asio::async_read calls, replacing this with shared_from_this() and wrapping the ReadHandlers with strand_.wrap(). I ran the updated example and still observed no delay. Unfortunately, that is as far as I could get based on the code posted in the question.

Consider expanding upon the example, adding in a piece from the real implementation with each iteration. For example:

  • Start with using the msg variable's type to control the buffer.
  • Next, send valid data, and introduce the parseHeader() and parsePacket() functions.
  • Finally, introduce the lib::GET_SERVER_TIME() print.

If the example code is as close as possible to the real code, and no delay is being observed with boost::asio::async_read, then the ReadHandlers may be ready-to-run in the real code, but they are waiting on synchronization (the strand) or a resource (a thread), resulting in a delay:

  • If the delay is the result of synchronization with the strand, then consider Robin's suggestion of reading a larger block of data to potentially reduce the number of reads required per message.
  • If the delay is the result of waiting for a thread, then consider having an additional thread call io_service::run(); a minimal sketch follows this list.
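
As a minimal sketch of the second point (not taken from the posted code), several threads can service the same io_service so that a ready-to-run handler does not have to wait for a single thread to finish other work. Handlers that touch shared state must then be synchronized, for example via the strand the real code already uses:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

int main()
{
  boost::asio::io_service service;

  // Keep run() from returning while there is no outstanding work yet.
  boost::asio::io_service::work work(service);

  // ... create the acceptor/server and start the async chain here ...

  // Two threads servicing the same io_service; handlers may now run
  // concurrently, so shared data must be protected (e.g. by a strand).
  boost::thread_group pool;
  for (int i = 0; i < 2; ++i)
    pool.create_thread(boost::bind(&boost::asio::io_service::run, &service));

  pool.join_all();
}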

One thing that makes Boost.Asio awesome is using the async feature to the fullest. Relying on a specific number of bytes read in one batch, possibly ditching some of what could already have been read, isn't really what you should be doing.

Instead, look at the example for the webserver, especially this: http://www.boost.org/doc/libs/1_51_0/doc/html/boost_asio/example/http/server/connection.cpp

A Boost tribool is used to either a) complete the request if all the data is available in one batch, b) discard it if it's available but not valid, or c) just read more, whenever the io_service chooses to, if the request was incomplete. The connection object is shared with the handler through a shared pointer.
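
For reference, this is roughly what the read handler in the linked connection.cpp does (condensed here and not standalone; request_parser_, request_, buffer_, socket_, and strand_ are members of that example's connection class, and the write paths are elided):

void connection::handle_read(const boost::system::error_code& e,
                             std::size_t bytes_transferred)
{
  if (e) return; // error handling elided

  // The parser consumes whatever arrived and reports one of three states.
  boost::tribool result;
  boost::tie(result, boost::tuples::ignore) = request_parser_.parse(
      request_, buffer_.data(), buffer_.data() + bytes_transferred);

  if (result)
  {
    // a) complete and valid: handle the request and async_write the reply
  }
  else if (!result)
  {
    // b) malformed: async_write a "bad request" reply
  }
  else
  {
    // c) indeterminate: ask for more bytes and come back here when they arrive
    socket_.async_read_some(boost::asio::buffer(buffer_),
        strand_.wrap(
            boost::bind(&connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred)));
  }
}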

Why is this superior to most other methods? You can potentially save the time between reads by already parsing the request. Sadly this is not followed through in the example, but ideally you'd thread the handler so it can work on the data already available while the rest is added to the buffer. The only time it blocks is when the data is incomplete.

Hope this helps. I can't shed any light on why there is a 3 ms delay between reads, though.
