
How to coordinate max connections to web service across requests in PHP

I have programmed my application to call an external REST service whenever a user makes a request, so that I can fetch the data needed to serve their request. The service provider has asked me to limit the number of concurrent connections. How would I set up something like a resource pool in PHP to limit the concurrent calls?


You would either have to have a centralized daemon process the requests (best answer), or keep some sort of count across the pool. The latter could be implemented by setting a memcache key that all machines in the pool increment/decrement to maintain the desired count.
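The shared-counter idea can be sketched like this. The key name, the limit of 5, and the function names are all illustrative (not from the answer above); `$mc` is anything with `add()`/`increment()`/`decrement()` following the Memcached extension's semantics:

```php
<?php
// Sketch of a pool-wide connection counter (illustrative names/values).
const SLOT_KEY  = 'ext_service_conns';
const MAX_CONNS = 5;

function tryAcquire($mc): bool
{
    $mc->add(SLOT_KEY, 0);              // create the counter if it is missing
    $count = $mc->increment(SLOT_KEY);  // atomic across every web server
    if ($count === false || $count > MAX_CONNS) {
        $mc->decrement(SLOT_KEY);       // over the limit: hand the slot back
        return false;
    }
    return true;
}

function release($mc): void
{
    $mc->decrement(SLOT_KEY);
}

// Usage against a real pool (requires the memcached extension):
// $mc = new Memcached();
// $mc->addServer('10.0.0.1', 11211);   // same server list on every web node
// if (tryAcquire($mc)) {
//     try { /* call the external REST service */ } finally { release($mc); }
// }
```

One caveat with this approach: if a request dies between the increment and the decrement, the count leaks a slot forever, so real implementations usually pair the counter with a TTL or a periodic reset.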

Depending on your exact requirements, you could cache the response data: have a centralized daemon that queries the REST service every time period (10 seconds, 60 seconds, etc.), and have your application simply pull the cached data. Obviously, that only works if you're not sending user-specific requests.
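A minimal sketch of the request side of that setup, assuming the daemon (cron job or loop) writes the service response to a file; the path and max age here are made-up values:

```php
<?php
// Illustrative file-backed cache: web requests only read; a separate
// daemon/cron job refreshes the file (path and TTL are assumptions).
const CACHE_FILE = '/tmp/rest_service_cache.json';
const MAX_AGE    = 60; // seconds; tune to the daemon's refresh interval

function cachedResponse(): ?array
{
    if (!is_file(CACHE_FILE) || time() - filemtime(CACHE_FILE) > MAX_AGE) {
        return null; // missing or stale: the daemon has not refreshed yet
    }
    $raw = file_get_contents(CACHE_FILE);
    return $raw === false ? null : json_decode($raw, true);
}

// The daemon side is just as small:
function refreshCache(string $json): void
{
    // write-then-rename so readers never see a half-written file
    $tmp = CACHE_FILE . '.tmp';
    file_put_contents($tmp, $json);
    rename($tmp, CACHE_FILE);
    clearstatcache(); // drop PHP's cached stat info for the old file
}
```

With this split, the external service sees exactly one client (the daemon) no matter how many web requests arrive.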

If you are sending user-specific requests, I would build an aggregator. Basically, when you want user-specific data, you make a request to a local aggregator. If the aggregator has a cached version of the response that's not too old, it returns it immediately. If not, it puts the request into an outbound queue. Then you can either come back and check later, or wait for the queue to finish processing. But beware of waiting, since it could add significant load to your server if you have lots of connections...

And here is one of the most annoying "features" of PHP: no application scope. Scripts are executed from scratch for each request, with no state persisting between them.

But to stop complaining and start being helpful: use UNIX file locks. If you are limited to 5 concurrent connections to the external resource, create 5 empty lock files. Whenever you want to make a call, try to acquire an exclusive lock on one of them (the flock() function). If you can't, the current "script instance/thread" has to wait until a lock is released.
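The technique above might look like this. The lock directory, slot count, and function names are illustrative; this uses non-blocking locks (LOCK_NB) and leaves the retry strategy to the caller:

```php
<?php
// Sketch of the file-lock semaphore: N lock files, try each one with a
// non-blocking exclusive flock() until a free slot is found.
const SLOT_COUNT = 5;
const LOCK_DIR   = '/tmp/ext_service_locks'; // illustrative path

function acquireFileLock() // returns a lock handle, or false if all busy
{
    if (!is_dir(LOCK_DIR)) {
        mkdir(LOCK_DIR, 0777, true);
    }
    for ($i = 0; $i < SLOT_COUNT; $i++) {
        $fh = fopen(LOCK_DIR . "/slot$i.lock", 'c'); // create, don't truncate
        if ($fh !== false && flock($fh, LOCK_EX | LOCK_NB)) {
            return $fh;            // got an exclusive lock on slot $i
        }
        if ($fh !== false) {
            fclose($fh);           // slot is held elsewhere; try the next one
        }
    }
    return false;                  // every slot is taken
}

function releaseFileLock($fh): void
{
    flock($fh, LOCK_UN);
    fclose($fh);
}

// Usage in a request:
$slot = acquireFileLock();
if ($slot !== false) {
    try {
        // ... call the external service ...
    } finally {
        releaseFileLock($slot);
    }
}
// else: sleep and retry, queue the request, or return a 503
```

A nice property of flock() is that the OS releases the lock automatically if the PHP process dies, so crashed requests can't leak a slot the way a shared counter can.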

This is not really nice, but it's perfectly thread-safe, as flock() is atomic. I have my own wrapper for handling these "critical sections" that require so-called thread lockout. The obvious downsides of this approach are the performance overhead and the inability to run multiple app servers without a shared filesystem (in that case, we fall back to either a shared FS or a shared memcache space).

The technical post webpages of this site follow the CC BY-SA 4.0 protocol. If you need to reprint, please indicate the site URL or the original address. For any questions, please contact: yoyou2525@163.com.

 
© 2020-2024 STACKOOM.COM