
Why does TCP use a coarse unit timer that cannot always be exact, whereas a timer in Java is precise?

I am reading "TCP/IP Illustrated", and the book explains that the kernel implements a unit timer (e.g. 500 ms) for TCP, and all other TCP timers are built on top of this unit timer. But the first period cannot be exact: for example, a retransmission timer of 12 units (6 s) may be started in the middle of a tick, so the actual timeout is somewhere between 5.5 s and 6 s.
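A minimal sketch of that tick-based design (a hypothetical illustration, not real kernel code): a free-running 500 ms tick thread, and a timeout expressed as a number of ticks. Because the timeout is armed somewhere between two ticks, the first counted tick is shorter than 500 ms, so a 12-tick timeout fires anywhere between 5.5 s and 6 s after it was armed.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical illustration of a coarse tick-based timer.
public class TickTimerDemo {
    static final long TICK_MS = 500;
    static final AtomicLong tick = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        // Free-running 500 ms unit timer, started long before any timeout is armed.
        Thread ticker = new Thread(() -> {
            try {
                while (true) { Thread.sleep(TICK_MS); tick.incrementAndGet(); }
            } catch (InterruptedException ignored) { }
        });
        ticker.setDaemon(true);
        ticker.start();

        Thread.sleep(730);                  // arm the timeout at an arbitrary point between ticks
        long armedAt = System.nanoTime();
        long expireTick = tick.get() + 12;  // "retransmit in 12 ticks" (nominally 6 s)

        while (tick.get() < expireTick) {
            Thread.sleep(10);               // poll; a kernel would instead check on each tick
        }
        long elapsedMs = (System.nanoTime() - armedAt) / 1_000_000;
        System.out.println("12-tick timeout fired after ~" + elapsedMs + " ms"); // ~5500-6000 ms
    }
}
```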

In contrast, in Java it is easy to call Thread.sleep(5000) (5000 ms), which is precise and not a range.
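One quick way to check that claim is to measure how long Thread.sleep(5000) actually takes (results vary with the OS scheduler and system load; this just illustrates the measurement):

```java
// Measure how close Thread.sleep(5000) really is to 5000 ms.
public class SleepCheck {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(5000);                                 // request 5000 ms
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("slept ~" + elapsedMs + " ms");  // typically 5000 ms plus a small overshoot
    }
}
```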

So why can't the TCP timer be exact, when the Java one can?

I think I roughly understand it and want to check my answer. Both the Java timer and the TCP timer count time in units of some underlying tick. The tick size is ultimately limited by the CPU clock, which runs at thousands of MHz, i.e. nanosecond-level resolution, so the implementer has a wide choice of granularity. Since network traffic tends to be slow, TCP chooses a tick of hundreds of milliseconds, while Java chooses a much finer unit.
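To put numbers on that granularity argument (purely illustrative arithmetic): quantizing the same 6 s timeout to different tick sizes shows how a coarse tick turns an exact deadline into a range.

```java
// Illustrative arithmetic only: how a tick-based timer quantizes a timeout.
// With tick size T and a timeout of n ticks, the actual delay falls in
// ((n - 1) * T, n * T], because the first tick is partially elapsed when armed.
public class Quantization {
    static void show(long timeoutMs, long tickMs) {
        long ticks = timeoutMs / tickMs;          // e.g. 6000 ms / 500 ms = 12 ticks
        long min = (ticks - 1) * tickMs;
        long max = ticks * tickMs;
        System.out.printf("tick=%4d ms -> %d ticks, actual delay in (%d, %d] ms%n",
                tickMs, ticks, min, max);
    }

    public static void main(String[] args) {
        show(6000, 500);   // coarse TCP-style tick: 5500-6000 ms
        show(6000, 10);    // finer tick: 5990-6000 ms
        show(6000, 1);     // 1 ms tick: 5999-6000 ms
    }
}
```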
