
Measuring elapsed time over network

I have developed a server and client application for streaming video frames from one end to another using RTSP. Now, in order to gather statistics which will assist me in improving my applications, I need to measure the elapsed time between sending the frame and receiving the frame.

At the moment I am using the following formula:

Client_Receive_Timestamp - Server_Send_Timestamp = Elapsed_Time

Problem

It seems to me that the elapsed time is about 100-200ms too high. I think the reason is that the server clock and client clock are not in sync and have a difference of about 100-200ms.

Question

How can I accurately measure the elapsed time between the two machines?

The topic Accurately measuring elapsed time between machines suggests calculating a round-trip delay. However, I can't use this solution as the client doesn't request the frames. It simply receives frames via RTSP.

Assuming the clocks on both machines are synchronized (for example, both running ntpd against the same time servers), you can simply subtract the "sent timestamp" from the "received timestamp" to obtain the latency duration. The observed error will be less than the sum of both clock errors. If the time scales are small enough (probably anything shorter than an hour), you can reasonably ignore slew effects.
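
As an illustration, here is a minimal sketch of that subtraction, assuming the clocks are already synchronized as described below. It uses std::chrono::system_clock (a wall clock that is comparable across machines) rather than a monotonic clock; the frame header field and the actual send/receive steps are hypothetical placeholders:

/* one_way_latency.cc (sketch)
   Assumes both machines' clocks are NTP-synchronized; the frame header
   field and the send/receive steps are hypothetical placeholders. */
#include <chrono>
#include <cstdint>
#include <iostream>

// Wall-clock "now" in microseconds since the Unix epoch.
// std::chrono::system_clock is used rather than high_resolution_clock
// because the two timestamps are taken on different machines and must
// refer to the same (wall) clock.
static std::int64_t wall_clock_us() {
  using namespace std::chrono;
  return duration_cast<microseconds>(
      system_clock::now().time_since_epoch()).count();
}

int main(int, char**) {
  // Server side: stamp the frame just before sending it.
  std::int64_t server_send_us = wall_clock_us();
  // ... transmit the frame with server_send_us embedded in its header ...

  // Client side: stamp the frame on arrival and subtract.
  std::int64_t client_receive_us = wall_clock_us();
  std::int64_t elapsed_us = client_receive_us - server_send_us;

  std::cout << "elapsed == " << elapsed_us << "us"
            << " (error bounded by the sum of both clock offsets)"
            << std::endl;
  return 0;
}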

If ntpd is not already running on both machines, and if you have the necessary permissions, then you can

$ sudo ntpdate -v pool.ntp.org

to force a synchronization with the pool of publicly-available time servers.

Then you can use the C++11 std::chrono::high_resolution_clock to calculate a duration:

/* hrc.cc */
#include <chrono>
#include <iostream>

int main(int, char**) {
  using std::chrono::high_resolution_clock;
  // send something
  high_resolution_clock::time_point start = high_resolution_clock::now();
  std::cout << "time this" << std::endl;
  // receive something
  high_resolution_clock::time_point stop = high_resolution_clock::now();
  std::cout
    << "duration == "
    << std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()
    << "ns"
    << std::endl;
  return 0;
}

Here's what the previous example looks like on my system:

$ make hrc && ./hrc
c++     hrc.cc   -o hrc
time this
duration == 32010ns

I need to measure the elapsed time between sending the frame and receiving the frame.

You don't need precisely synchronized timestamps for this. You can instead estimate the latency and average it over many measurements.

If A sends a packet (or a frame) to B, and B responds immediately (*):

A (sendTime) ---> B ---> A (receivedTime)

then you can calculate the latency easily:

latency = (receivedTime - sendTime) / 2

This assumes, of course, that the latency is symmetrical. You can find more elaborate algorithms by searching for phrases like "network latency estimation algorithm".
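
Here is a minimal sketch of that round-trip estimate; send_probe() and wait_for_echo() are hypothetical placeholders for whatever echo channel the application provides (for example, a tiny UDP ping alongside the RTSP stream), and the path is assumed to be symmetric:

/* rtt_probe.cc (sketch)
   send_probe() and wait_for_echo() are hypothetical placeholders for
   whatever echo channel the application provides. */
#include <chrono>
#include <iostream>

void send_probe()    { /* write a small packet to B */ }
void wait_for_echo() { /* block until B's immediate reply arrives */ }

int main(int, char**) {
  using clock_type = std::chrono::steady_clock;  // monotonic, unaffected by NTP steps
  const int samples = 50;
  double total_us = 0.0;

  for (int i = 0; i < samples; ++i) {
    clock_type::time_point sendTime = clock_type::now();
    send_probe();
    wait_for_echo();
    clock_type::time_point receivedTime = clock_type::now();

    // latency = (receivedTime - sendTime) / 2, assuming a symmetric path
    total_us += std::chrono::duration_cast<std::chrono::microseconds>(
                    receivedTime - sendTime).count() / 2.0;
  }

  std::cout << "estimated one-way latency == " << total_us / samples
            << "us (average of " << samples << " probes)" << std::endl;
  return 0;
}

Using std::chrono::steady_clock here keeps the measurement unaffected by NTP adjustments, since both timestamps are taken on the same machine.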

Having the estimated latency, you can of course also estimate the time difference between the clocks (though that doesn't seem necessary here):

A (sendTime) ---> B (receivedTimeB) -- (receivedTimeB) --> A

timeDelta = sendTime + latency - receivedTimeB

Note that even if you average many results, this algorithm is probably highly biased. It is just a simple example to illustrate the general idea.
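
For what it's worth, here is a small worked illustration of the timeDelta formula above, using made-up timestamp values rather than measured data:

/* clock_offset.cc (sketch)
   The three values below are made-up sample numbers in milliseconds:
   sendTime comes from A's clock, receivedTimeB is reported back by B,
   and latency is the estimate obtained as above. */
#include <cstdint>
#include <iostream>

int main(int, char**) {
  std::int64_t sendTime      = 1000000;  // A's wall clock when the probe left (ms)
  std::int64_t receivedTimeB = 1000180;  // B's wall clock when the probe arrived (ms)
  std::int64_t latency       = 30;       // estimated one-way latency (ms)

  // timeDelta = sendTime + latency - receivedTimeB
  // This is the correction to add to B's timestamps to express them on
  // A's clock; a negative value means B's clock runs ahead of A's.
  std::int64_t timeDelta = sendTime + latency - receivedTimeB;

  std::cout << "timeDelta == " << timeDelta << "ms" << std::endl;  // -150ms here
  return 0;
}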


(*) The fact that the response does not really happen immediately of course introduces an error, which depends on how heavily machine B is loaded.
