
boost asio async_read delay (local socket)

I have a problem with socket programming using Boost.Asio.

The flow of my program is as follows:

The client uses async_write to send data to the server, then uses async_read to receive the reply. All of the async operations take use_future as the completion token, and the timeout is 2 seconds.
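
A minimal sketch of that flow (placeholder names throughout; it assumes io_context.run() is executing on another thread, since use_future needs something else to run the completion handlers):

#include <boost/asio.hpp>
#include <boost/asio/use_future.hpp>
#include <chrono>
#include <future>

// One request/response round trip with use_future completion tokens.
void round_trip(boost::asio::ip::tcp::socket& socket,
                boost::asio::streambuf& request,
                boost::asio::streambuf& reply,
                std::size_t reply_size)
{
    const auto timeout = std::chrono::seconds(2);

    // Send the request; the future becomes ready once the whole write is done.
    auto write_future = boost::asio::async_write(socket, request,
                                                 boost::asio::use_future);
    if (write_future.wait_for(timeout) == std::future_status::timeout)
        return;  // handle the timeout (log, cancel, ...)

    // Read the reply; again wait at most 2 seconds for completion.
    auto read_future = boost::asio::async_read(
        socket, reply, boost::asio::transfer_exactly(reply_size),
        boost::asio::use_future);
    if (read_future.wait_for(timeout) == std::future_status::timeout)
        return;  // handle the timeout

    // Parse the reply out of `reply`, then discard the consumed bytes.
    reply.consume(read_future.get());
}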

The problem is:

The server's processing time is about 10 ms. The server and the client run on the same machine, so the client should receive the data almost as soon as the server sends it. However, when I measured the latency of async_read, I found that some reads take about 10 ms while others take about 30~40 ms, and the two cases usually alternate.

How I measured it:

auto start_time = std::chrono::steady_clock::now();

// Launch the read, then block for at most `timeout` waiting for the future.
auto read_result = async_read(..., use_future);
auto read_future_status = read_result.wait_for(timeout);

auto end_time = std::chrono::steady_clock::now();
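
For completeness, the elapsed time can then be reported like this (a detail the snippet above leaves out):

// Convert the measured interval to milliseconds for logging.
auto latency_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
    end_time - start_time).count();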

I have tried two solutions:

  1. Allocate more space to the boost::asio::streambuf. I allocated 4096 bytes, and my data never exceeds 1500 bytes, but it didn't help.
  2. Use socket_.set_option(ip::tcp::no_delay(true)); to turn off Nagle's algorithm (note that delayed ACK is a separate OS-level mechanism and is not affected by this option). That didn't help either; a sketch of where to set it is below.
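
For reference, a typical place to set the option is right after the connection is established (socket_ is assumed to be a connected boost::asio::ip::tcp::socket):

// Disable Nagle's algorithm on the connected socket. This stops the
// sender from coalescing small writes; the receiver's delayed-ACK timer
// is controlled by the OS, not by this option.
boost::system::error_code ec;
socket_.set_option(boost::asio::ip::tcp::no_delay(true), ec);
if (ec)
    std::cerr << "set_option failed: " << ec.message() << '\n';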

I have no idea what causes this. Could anyone help me? Please...

Update: here are parts of my source code.

Below is the code that sends a request to the server:

// Send the request and wait at most `timeout` seconds for the write to finish.
auto send_result = async_write(input_socket, out_buffer, use_future);
auto send_status = send_result.wait_for(std::chrono::seconds(timeout));
if(send_status == std::future_status::timeout)
{
    LOG4CPLUS_ERROR_FMT(logger_, "send %s future error : (%d).", action, static_cast<int>(send_status));
}

// Note: execution continues here even when the wait above timed out.
out_buffer.consume(request.length());
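
Two things worth noting here (observations about the Asio API, not about code we cannot see). First, the streambuf overload of async_write already consumes the data it has written from the streambuf, so if out_buffer is a boost::asio::streambuf, the extra consume(request.length()) may discard bytes of the next message. Second, a timed-out future does not cancel the pending write; the code above only logs and carries on. A hedged sketch of an alternative:

// Sketch, assuming out_buffer is a boost::asio::streambuf; names are the
// poster's. The streambuf overload of async_write consumes the sent bytes
// itself, so no manual consume() is needed afterwards.
auto send_result = async_write(input_socket, out_buffer, use_future);
if (send_result.wait_for(std::chrono::seconds(timeout)) ==
    std::future_status::timeout)
{
    input_socket.cancel();  // otherwise get() below blocks until the write finishes
}
try
{
    send_result.get();  // rethrows boost::system::system_error on failure/cancel
}
catch (const boost::system::system_error& e)
{
    LOG4CPLUS_ERROR_FMT(logger_, "send %s failed : (%s).", action, e.what());
}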

Below is the code that receives data from the server:

// Read exactly one header and wait at most `timeout` seconds for completion.
auto read_buffer_result = async_read(output_socket, in_buffer, transfer_exactly(sizeof(Header)), use_future);
auto read_status = read_buffer_result.wait_for(std::chrono::seconds(timeout));
if(read_status == std::future_status::timeout)
{
    LOG4CPLUS_ERROR_FMT(logger_, "read %s header future error : (%d).", action, static_cast<int>(read_status));
}
// get() blocks again until the read really completes, even after a timeout.
size_t byte_transferred = read_buffer_result.get();
in_buffer.consume(byte_transferred);
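
The same caveat applies here, and more sharply: get() blocks until the read actually completes, so after a timeout this code can stall far beyond the 2-second budget. A sketch of cancelling first (again an assumption, not the poster's full code):

if (read_status == std::future_status::timeout)
{
    output_socket.cancel();  // the pending read completes with operation_aborted
}
try
{
    size_t byte_transferred = read_buffer_result.get();
    in_buffer.consume(byte_transferred);
}
catch (const boost::system::system_error& e)
{
    // operation_aborted after a cancel, or a genuine read error.
    LOG4CPLUS_ERROR_FMT(logger_, "read %s failed : (%s).", action, e.what());
}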

My first guess is that you're making Asio block until the full I/O completes. That introduces latency.

For latency-sensitive code you should always use async_read_some() and async_write_some(). These complete as soon as any partial I/O is available, so you get whatever the OS can deliver as soon as it can deliver it. You'll need to refactor your code to handle partial I/O, of course: process as much as you can per handler invocation, and keep any undrained, partially sent or received buffers around for retry on the next invocation until they're spent, as in the sketch below.
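
A minimal sketch of that partial-read pattern (illustrative names and framing, not the poster's code): each handler invocation processes whatever arrived and immediately re-arms the read.

#include <boost/asio.hpp>
#include <array>
#include <cstddef>
#include <iostream>
#include <utility>

class reader
{
public:
    explicit reader(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read()
    {
        // Completes as soon as *any* bytes are available, instead of
        // waiting for a fixed amount like async_read + transfer_exactly.
        socket_.async_read_some(
            boost::asio::buffer(chunk_),
            [this](boost::system::error_code ec, std::size_t n)
            {
                if (ec) return;  // connection closed or error
                consume(n);      // process the partial data right away
                do_read();       // re-arm for the next chunk
            });
    }

    void consume(std::size_t n)
    {
        // Application-specific: append chunk_[0..n) to a message buffer,
        // peel off any complete messages, keep the undrained remainder.
        std::cout << "got " << n << " bytes\n";
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, 4096> chunk_{};
};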

My second guess is that you may be using strands. These introduce latency. See the asio-users mailing list.

Thanks for the help. I have found the answer --- I was running a debug build... but something is still weird.

At first, I developed in debug mode. After finishing development, I switched to release mode and used "Build Solution" to build my DLL. I thought the DLL was now the release version, so I started to evaluate the performance. But this morning, I happened to run "Clean Solution" followed by "Build Solution". After that build, I evaluated the performance again and found that almost all of the latencies are now 10 ms. I guess the evaluations before this morning were done with a debug-version DLL.

I still don't understand the difference between "switching from debug to release and building the solution" and "switching from debug to release, cleaning the solution, and then building it."

I will try to find out the difference.

I really appreciate the help.
