HTTP connections slow down or deadlock with .NET HttpClient

We have an ASP.NET Web API application that needs to issue a lot of calls to other web applications (it is basically a reverse proxy). To do this we use the async methods of HttpClient.

Yes, we have seen the hints about using only one HttpClient instance and not disposing of it.
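For reference, this is roughly the pattern we follow (a simplified sketch; the class name is illustrative, not our actual code):

    using System.Net.Http;

    // One HttpClient for the whole process, created once and never disposed.
    public static class ProxyHttpClient
    {
        public static HttpClient Instance { get; } = new HttpClient();
    }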

Yes, we have seen the hints about setting configuration values, especially the problem with the lease timeout. Currently we set ConnectionLimit = CPU*12, ConnectionLeaseTimeout = 5min and MaxIdleTime = 30s.
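For illustration, this is roughly how such settings are applied at startup (a simplified sketch; "backend.example.com" is a placeholder):

    using System;
    using System.Net;

    public static class HttpConfig
    {
        // Called once at application startup (e.g. from Application_Start).
        public static void Apply()
        {
            // Allow up to CPU * 12 concurrent connections per endpoint.
            ServicePointManager.DefaultConnectionLimit = Environment.ProcessorCount * 12;

            // Close idle connections after 30 seconds (global setting).
            ServicePointManager.MaxServicePointIdleTime =
                (int)TimeSpan.FromSeconds(30).TotalMilliseconds;

            // The lease timeout exists only per ServicePoint, so it has to be
            // set per endpoint once the URI is known.
            var sp = ServicePointManager.FindServicePoint(new Uri("https://backend.example.com"));
            sp.ConnectionLeaseTimeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;
        }
    }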

We can see that the connections behave as desired, and throughput in a load test was also very good. However, we are facing issues where the connections occasionally stop working. This seems to happen when a lot of requests come in (which, since we are a reverse proxy, cause new outgoing requests to be issued), and it happens mostly (but not only) with the slowest of all backend applications. Requests to that endpoint then take forever to finish or simply end in a timeout.

An IISReset of the server hosting our reverse proxy application resolves the problem (for a while).

We have already investigated several areas:

  • Performance issues of the remote web application: Although the symptoms look exactly like this, performance is good when the same requests are issued locally on the remote server, and the CPU / network values there are low.
  • Network issues (bandwidth, router, firewall, load balancers): Possible, but rather unlikely since everything else runs stably and our hoster is involved in the analysis too.
  • Thread pool starvation: Not impossible, but rather theoretical; we do issue a lot of async calls, but shouldn't async actually help with this issue?
  • HttpCompletionOption.ResponseHeadersRead: Not a problem by itself, but maybe one piece of the puzzle? (See the sketch after this list.)
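For context, this is how ResponseHeadersRead is typically used when streaming responses through a proxy (a minimal sketch; ForwardAsync and its parameters are illustrative):

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class Streaming
    {
        public static async Task<Stream> ForwardAsync(HttpClient client, string uri)
        {
            // Returns as soon as the headers arrive; note that the connection
            // stays occupied until the response body is fully read or disposed,
            // which is why this option could be a piece of the puzzle.
            HttpResponseMessage response = await client.GetAsync(
                uri, HttpCompletionOption.ResponseHeadersRead);

            return await response.Content.ReadAsStreamAsync();
        }
    }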

The best explanation so far focuses on the ConnectionLimit: we only recently started setting the values mentioned above, and this seems to have triggered the problems. But why would it? Shouldn't reusing connections be an improvement over opening a new one for every request? And the values we set seem rather conservative.

We have recently started experimenting with these values to see their impact in production, yet it is still unclear to us whether this is the only cause, and we would appreciate a more straightforward approach to the analysis. Unfortunately, a memory dump and netstat printouts did not help any further.

Any suggestions on how to analyze this, or hints about possible causes, would be highly appreciated.

***** EDIT *****

Setting the connection limit to 1000 solves the issue! So the question remains: why is that the case? From what we know, the default connection limit is 2 in a non-web application and 1000 in a web application. Microsoft suggests a default value of CPU * 12 (but they did not implement it like that?!), so our change was basically to go from 1000 down to 48. Still, we can see that only a handful of connections are open. Can anyone shed some light on this? What is the exact behaviour regarding opening new connections, reusing existing ones, pipelining, etc.? Is there any source of information on this?

ConnectionLimit means ServicePointManager.DefaultConnectionLimit? Yes, it matters. When the value is X and there are already X requests awaiting a response, a new request will not be sent until one of the previous requests has finished.
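To make that behaviour concrete, here is a minimal repro sketch (my own illustration; "slow.example.com" stands in for a slow backend): with a limit of 2, the third request is queued by the ServicePoint until one of the first two completes.

    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ConnectionLimitDemo
    {
        static async Task Main()
        {
            ServicePointManager.DefaultConnectionLimit = 2;
            var client = new HttpClient();

            // Fire three requests at the same endpoint. With a limit of 2, the
            // third waits in the ServicePoint queue until a slot frees up.
            var tasks = Enumerable.Range(0, 3)
                .Select(i => client.GetAsync("https://slow.example.com/?i=" + i))
                .ToArray();
            await Task.WhenAll(tasks);
        }
    }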

I posted a follow-up question here: How to disable pipelining for the .NET HttpClient

Unfortunately there were no real answers to any of my questions. We ended up leaving the ConnectionLimit at 1000 (which is only a workaround, but the only solution we were able to find).
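For completeness, the workaround amounts to a single line at application startup:

    // Restore the web-application default instead of CPU * 12.
    ServicePointManager.DefaultConnectionLimit = 1000;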
