
Throttle bandwidth on the receiving-side (client) socket

I have a Java-based client app that connects to a server and receives data.

My goal is to throttle the packets transfer rate.

To achieve this, I simply call the following method:

Socket#setReceiveBufferSize(int)

I have gathered that setting the receive buffer to a small size prevents the TCP congestion window from growing, thereby throttling the transfer rate.

Under this assumption, I've done some tests and it seems to be working.

My question is: is this a valid approach to achieve this goal? What are the pros and cons?

Thank you!

Socket#setReceiveBufferSize(int) is only treated as a hint: the underlying implementation may override the requested value, for example if it is too small. You can verify the size actually granted at runtime with Socket#getReceiveBufferSize(). Note also that for the buffer size to affect the TCP window advertised during the handshake, it should be set before the socket is connected.
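As a quick check, you can compare what you requested against what the OS actually granted. A minimal sketch (the class name `BufferSizeCheck` is mine, not from the original post):

```java
import java.io.IOException;
import java.net.Socket;

public class BufferSizeCheck {
    /** Requests a receive buffer size on an unconnected socket and returns what was granted. */
    public static int requestedVsActual(int requested) throws IOException {
        try (Socket socket = new Socket()) {          // unconnected; no network traffic happens
            socket.setReceiveBufferSize(requested);   // this is only a hint to the OS
            return socket.getReceiveBufferSize();     // the size the OS actually applied
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("granted receive buffer: " + requestedVsActual(4096) + " bytes");
    }
}
```

On many systems the granted value will differ from the request (it is often rounded up or clamped to a minimum), which is exactly why the hint alone is a fragile throttling mechanism.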

To actually throttle the packet rate on the server side, I would recommend using Google Guava's RateLimiter:

  // requires Guava: com.google.common.util.concurrent.RateLimiter
  final RateLimiter rateLimiter = RateLimiter.create(5000.0); // 5000 permits per second (here: bytes/s)

  void submitPacket(byte[] packet) {
    rateLimiter.acquire(packet.length); // blocks until enough permits (bytes) are available
    networkService.send(packet);
  }

On the client, you will generally want to read chunks as fast as possible; how you do that depends on your specific use of the InputStream.

Also, regarding congestion windows: I found a white paper that I skimmed, if you want to read actual research. For a more concise overview, see https://en.wikipedia.org/wiki/TCP_congestion_control

If you want to throttle on the client and you are using TCP, you can simply read less. TCP flow control will compensate and the sender will transmit less data. See "How do I implement client-side bandwidth throttling for FTP/HTTP?"
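The "just read less" idea can be sketched as an InputStream wrapper that sleeps so the average read rate stays under a cap; once the client stops draining the socket, the kernel's receive buffer fills, the advertised TCP window shrinks, and the sender slows down. The class `ThrottledInputStream` below is my own illustration, not from the original answers:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Caps the average read rate in bytes/second; TCP flow control then slows the sender. */
public class ThrottledInputStream extends FilterInputStream {
    private final long bytesPerSecond;
    private final long startNanos = System.nanoTime();
    private long totalRead = 0;

    public ThrottledInputStream(InputStream in, long bytesPerSecond) {
        super(in);
        this.bytesPerSecond = bytesPerSecond;
    }

    @Override
    public int read() throws IOException {
        throttle();
        int b = super.read();
        if (b >= 0) totalRead++;
        return b;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        throttle();
        int n = super.read(b, off, len);
        if (n > 0) totalRead += n;
        return n;
    }

    /** Sleeps if we are ahead of the allowed cumulative byte budget. */
    private void throttle() throws IOException {
        long elapsedNanos = System.nanoTime() - startNanos;
        long expectedNanos = totalRead * 1_000_000_000L / bytesPerSecond;
        long sleepMillis = (expectedNanos - elapsedNanos) / 1_000_000L;
        if (sleepMillis > 0) {
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while throttling", e);
            }
        }
    }
}
```

Usage: wrap the socket's stream, e.g. `new ThrottledInputStream(socket.getInputStream(), 64 * 1024)` to cap at roughly 64 KiB/s. This throttles the average rate only; the linked question discusses smoother token-bucket variants.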
