
How is HTTP/2 hop-by-hop flow control accomplished?

As the spec says:

Flow control is specific to a connection. Both types of flow control are between the endpoints of a single hop and not over the entire end-to-end path.

And in Section 6.9, WINDOW_UPDATE:

Both types of flow control are hop by hop, that is, only between the two endpoints. Intermediaries do not forward WINDOW_UPDATE frames between dependent connections. However, throttling of data transfer by any receiver can indirectly cause the propagation of flow-control information toward the original sender.

But how is this even possible? It seems to require all intermediaries to understand the h2 or h2c protocol, and I've got two questions:

  1. HTTP/2 is a relatively new standard, and I've seen many websites have it enabled (my blog included). While I can visit these websites without any problem, does that mean every intermediary device along the way, like routers and hubs, has already implemented its own HTTP/2 stack and flow control algorithm (since RFC 7540 doesn't stipulate a flow control algorithm)?

  2. Most websites use h2 rather than h2c, which encrypts application-layer data. HTTP/2's flow control is done by receivers sending WINDOW_UPDATE frames, which are also application-layer data, so how do intermediary devices know what this data is? If they can't decrypt the data and see the Window Size Increment field, how do they accomplish flow control without forwarding WINDOW_UPDATE frames?


First, a few corrections.

The token h2c refers to clear-text HTTP/2 (hence the c in h2c). In your second bullet you say that most websites use it, but in fact very few do, because browsers don't implement it. The vast majority of web sites use h2.

The token h2 refers to encrypted h2c, or equivalently h2c over TLS.

When a client and a server negotiate to speak h2, the bytes that the client sends are encrypted and travel encrypted all the way to the server. This means that intermediaries do not have a chance to decrypt the traffic (thankfully).

In this case, the "hop" referred to by the HTTP/2 specification is the whole network segment that is between the client and the server.

The HTTP/2 specification, however, needs to be generic and not worry about how browsers and web servers interact when defining a wire protocol such as HTTP/2.

Imagine a situation where the client performs an HTTP/2 request to server1 using h2, and server1 needs to call server2 to fulfill the request, this time using h2c. For example, server1 could be a front-end "proxy" that forwards requests to the "right" back-end server depending on some logic.

In this case you have 2 hops: client-server1 and server1-server2.

Each hop applies its own flow control.

For example, imagine the client uploading a large file to the server. Typically, the client's flow-control send window is small, say the default 65535 octets. The client can only send up to 65535 octets before stalling the upload.
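
To make the window accounting concrete, here is a minimal Python sketch of a sender-side flow-control window. It is not tied to any real HTTP/2 library; the names (SendWindow, window_update) are made up for illustration:

```python
# Minimal sketch (not tied to any real HTTP/2 library) of how a sender
# tracks its flow-control send window and stalls when the window is exhausted.

DEFAULT_WINDOW = 65535  # initial window size defined by RFC 7540, section 6.9.2


class SendWindow:
    def __init__(self, size: int = DEFAULT_WINDOW):
        self.available = size

    def send(self, data: bytes) -> bytes:
        """Send as much of `data` as the window allows; return the unsent rest."""
        allowed = min(len(data), self.available)
        self.available -= allowed
        # ... the first `allowed` bytes would go out here as DATA frames ...
        return data[allowed:]  # the leftover must wait for a WINDOW_UPDATE

    def window_update(self, increment: int) -> None:
        """Called when the peer's WINDOW_UPDATE frame arrives."""
        self.available += increment


upload = b"x" * 100_000
window = SendWindow()
pending = window.send(upload)
print(len(pending))  # 34465 octets are stalled until the server sends a WINDOW_UPDATE
```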

These 65535 octets are received by server1. Now server1 becomes a client in order to communicate with server2. Let's imagine that server1's client has been configured with a much smaller flow-control window when it communicates with server2, say just 16384 octets.

In this example, server1 stalls the upload to server2 after 16384 octets, and must keep around the remaining 65535-16384=49151 octets while waiting for server2 to notify (via a WINDOW_UPDATE frame) that the uploaded data has been consumed.

When server1's client receives the WINDOW_UPDATE from server2, it can send more data to server2; but it also has to decide whether to send a WINDOW_UPDATE to the client (since its flow-control window with the client now has room for an additional 16384 octets) or to wait a little longer. For example, it could send another 16384 octets to server2, and only upon receiving the second WINDOW_UPDATE from server2 decide to send a WINDOW_UPDATE to the client (with a window increment of 16384+16384=32768 octets).
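
The following Python sketch mirrors this example. It only illustrates the bookkeeping server1 has to do, not any particular implementation; all names (ProxyStream, on_window_update_from_server2, etc.) are invented, and the "wait until half the client's window is consumed" strategy is just the one described above:

```python
# Illustrative bookkeeping for server1 acting as a proxy between the client
# (window 65535) and server2 (window 16384). All names are invented; this is
# not any real HTTP/2 library's API.

CLIENT_WINDOW = 65535   # window server1 advertised to the client
SERVER2_WINDOW = 16384  # window server2 advertised to server1


class ProxyStream:
    def __init__(self):
        self.buffer = bytearray()                    # received from the client, not yet sent on
        self.send_window_to_server2 = SERVER2_WINDOW
        self.pending_client_update = 0               # consumed downstream, not yet granted upstream

    def on_data_from_client(self, data: bytes) -> None:
        self.buffer += data
        self._flush_to_server2()

    def _flush_to_server2(self) -> None:
        allowed = min(len(self.buffer), self.send_window_to_server2)
        # ... self.buffer[:allowed] would be sent to server2 as DATA frames ...
        del self.buffer[:allowed]
        self.send_window_to_server2 -= allowed

    def on_window_update_from_server2(self, increment: int) -> None:
        self.send_window_to_server2 += increment
        self.pending_client_update += increment
        self._flush_to_server2()
        # Strategy from the example above: only tell the client about freed space
        # once at least half of its window has been consumed downstream.
        if self.pending_client_update >= CLIENT_WINDOW // 2:
            print(f"WINDOW_UPDATE to client: {self.pending_client_update}")
            self.pending_client_update = 0


stream = ProxyStream()
stream.on_data_from_client(b"x" * 65535)     # the client's upload stalls here
stream.on_window_update_from_server2(16384)  # no update to the client yet
stream.on_window_update_from_server2(16384)  # prints "WINDOW_UPDATE to client: 32768"
```

Running the three calls at the bottom reproduces the example: the client stalls at 65535 octets, and server1 only sends the client a WINDOW_UPDATE of 32768 octets after server2 has consumed two chunks of 16384.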

As you can see from the example above, the flow control between the client and server1 is related to, but independent of, the flow control between server1 and server2.

You may also want to read this answer for a discussion about flow control strategy implementations.

It depends on the meaning of hops/intermediaries.

If the intermediaries are at lower levels (TCP gateways, NATs, switches, etc.) then they are transparent to HTTP/2, since HTTP/2 flow control is applied end-to-end between an HTTP/2 client and server. The individual hops in between might use lower-level flow control mechanisms, such as TCP flow control.

If your intermediary is an HTTP proxy then there are basically two separate HTTP requests going on, and each applies its own flow control. The proxy application has the responsibility to connect those individual hops while retaining the flow control properties, e.g. by not reading the whole response from the second hop at once and only then forwarding it to the first hop, but by streaming suitable chunks of data.
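
Here is a hedged sketch of that "stream suitable chunks" idea in Python, assuming upstream and downstream are blocking, file-like byte streams (illustrative names, not a real proxy API):

```python
# Sketch of "stream suitable chunks": the proxy never reads the whole response
# from the second hop before forwarding it to the first hop. `upstream` and
# `downstream` are assumed to be blocking, file-like byte streams (e.g. sockets
# wrapped with makefile()); the names are illustrative.

CHUNK_SIZE = 16384  # a bounded chunk, roughly one HTTP/2 DATA frame's payload


def stream_between_hops(upstream, downstream) -> None:
    while True:
        chunk = upstream.read(CHUNK_SIZE)  # blocks until the second hop delivers data
        if not chunk:
            break                          # the second hop finished the response
        downstream.write(chunk)            # blocks if the first hop is slow to consume
    downstream.flush()
```

Because every read and write is bounded and blocking, a slow receiver on the first hop naturally slows the reads on the second hop, which is how a receiver's throttling "indirectly causes the propagation of flow-control information toward the original sender" without any WINDOW_UPDATE frame being forwarded.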

In the case of HTTP proxies you can even end up in situations where you proxy HTTP/1.1 to HTTP/2 and the other way around. In these situations the proxy would use the HTTP/2 flow control mechanisms to guarantee flow control on that hop and use TCP flow control to provide flow control on the other hop. If the protocol type is properly encapsulated in the proxy application (which means it provides streaming operations that respect flow control for Request and Response types) then proxying the streams between the different protocol types should not be too hard.
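
As a rough illustration of bridging the two mechanisms, the sketch below only grants the HTTP/2 peer more window after the blocking TCP write on the HTTP/1.1 side has completed. The h2-side helpers (read_data_frame, send_window_update) are invented placeholders for whatever HTTP/2 implementation the proxy actually uses:

```python
# Rough illustration of bridging an HTTP/2 hop to an HTTP/1.1 (plain TCP) hop.
# The h2-side helpers (read_data_frame, send_window_update) are invented
# placeholders, not a real library's API; tcp_socket is a standard socket.

def bridge_h2_request_to_h1(h2_stream, tcp_socket) -> None:
    while True:
        frame = h2_stream.read_data_frame()  # bounded by the HTTP/2 receive window
        if frame is None:
            break                            # end of the HTTP/2 request body
        tcp_socket.sendall(frame.data)       # blocks under TCP backpressure
        # Only after the TCP write completes is the HTTP/2 peer given more window,
        # so a slow HTTP/1.1 hop throttles the HTTP/2 hop as well.
        h2_stream.send_window_update(len(frame.data))
```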
