
Optimize PHP+CURL connection times

I have PHP scripts making cURL POST requests to remote Nginx servers over HTTPS, several times per second.

My issue is that each request needs 3 round-trips (TCP connection + SSL handshake) before the transfer can start, which significantly slows down the process.

Is there a way to reduce this, for instance with some sort of "Keep-Alive" to avoid renegotiating TCP/SSL for each request?
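For reference, the handshake overhead is visible in cURL's own timing counters. A minimal sketch (the commented-out endpoint URL is a placeholder):

```php
<?php
// Break down where a single request's time goes, using cURL's
// timing counters. CURLINFO_APPCONNECT_TIME marks the end of the
// SSL/TLS handshake, CURLINFO_CONNECT_TIME the end of TCP connect.
function reportTimings(CurlHandle $ch): string
{
    $tcp   = curl_getinfo($ch, CURLINFO_CONNECT_TIME);    // until TCP connect
    $tls   = curl_getinfo($ch, CURLINFO_APPCONNECT_TIME); // until SSL done
    $total = curl_getinfo($ch, CURLINFO_TOTAL_TIME);
    return sprintf('TCP: %.3fs  SSL: %.3fs  total: %.3fs',
                   $tcp, $tls - $tcp, $total);
}

// Usage (URL is a placeholder):
// $ch = curl_init('https://example.com/endpoint');
// curl_setopt_array($ch, [
//     CURLOPT_POST           => true,
//     CURLOPT_POSTFIELDS     => 'data=1',
//     CURLOPT_RETURNTRANSFER => true,
// ]);
// curl_exec($ch);
// echo reportTimings($ch), "\n";
```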

Thank you!

There is no way to keep a connection alive between two separate PHP executions, because the PHP script "dies" at the end of each run (closing any open sockets). The only way to achieve what you want would be a background PHP script that never stops: it fetches the data and puts it into a database or a file that you can query easily and quickly later.
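Within a single long-running script, though, you can at least amortize the handshakes by reusing one cURL handle: libcurl keeps the connection open between `curl_exec()` calls, so only the first request pays for the TCP + SSL round-trips. A minimal sketch (URL and payloads are placeholders):

```php
<?php
// Sketch: a long-running worker that reuses one cURL handle so the
// TCP + SSL handshake happens only once; subsequent POSTs ride the
// kept-alive connection.

function makeHandle(string $url): CurlHandle
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TCP_KEEPALIVE  => 1,      // send TCP keep-alive probes
        CURLOPT_FORBID_REUSE   => false,  // allow connection reuse (default)
        CURLOPT_FRESH_CONNECT  => false,  // do not force a new connection
    ]);
    return $ch;
}

function postJson(CurlHandle $ch, array $payload): string|false
{
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
    return curl_exec($ch);
}

// Usage: one handle, many requests (URL is a placeholder).
// $ch = makeHandle('https://example.com/endpoint');
// for ($i = 0; $i < 100; $i++) {
//     postJson($ch, ['seq' => $i]);
// }
// curl_close($ch);
```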

On another topic: making multiple HTTPS requests per second may not be the most efficient approach. If you control the server you are querying, you might want to use WebSockets, which would let you make multiple queries per second without any major performance issue.
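To illustrate why WebSockets help here: after a single HTTP Upgrade handshake, every message is just a small frame written to the already-open TCP/SSL socket — no per-request negotiation. A minimal sketch of the client-side frame encoding from RFC 6455 (the host and path in the commented handshake are placeholders):

```php
<?php
// Sketch: encode a client-to-server WebSocket text frame. Clients
// must XOR-mask their payloads with a random 4-byte key (RFC 6455).
function encodeTextFrame(string $payload): string
{
    $len  = strlen($payload);
    $head = chr(0x81);                               // FIN = 1, opcode = 1 (text)
    if ($len < 126) {
        $head .= chr(0x80 | $len);                   // mask bit + 7-bit length
    } elseif ($len < 65536) {
        $head .= chr(0x80 | 126) . pack('n', $len);  // 16-bit extended length
    } else {
        $head .= chr(0x80 | 127) . pack('J', $len);  // 64-bit extended length
    }
    $mask = random_bytes(4);
    // String XOR truncates to the shorter operand, so repeating the
    // mask past the payload length is safe.
    $masked = $payload ^ str_repeat($mask, intdiv($len, 4) + 1);
    return $head . $mask . $masked;
}

// One Upgrade handshake, then many messages on the same socket
// (host and path are placeholders):
// $sock = stream_socket_client('tls://example.com:443', $errno, $errstr);
// fwrite($sock, "GET /ws HTTP/1.1\r\nHost: example.com\r\n"
//     . "Upgrade: websocket\r\nConnection: Upgrade\r\n"
//     . "Sec-WebSocket-Key: " . base64_encode(random_bytes(16)) . "\r\n"
//     . "Sec-WebSocket-Version: 13\r\n\r\n");
// ... read the "101 Switching Protocols" response, then:
// fwrite($sock, encodeTextFrame('{"seq":1}'));
```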

I hope this answered your question. Have a good day!
