
Any way to extend request timeout limit?

I am getting:

[2] [CRITICAL] WORKER TIMEOUT at=error code=H12 desc="Request timeout" method=POST dyno=web.1 connect=1ms service=30000ms

I am starting a Flask web app on Heroku with:

web: gunicorn server:app --timeout 60 --worker-class gevent --log-file=-

The --timeout flag doesn't seem to have any effect, whether I use sync or gevent workers. Any ideas how I can extend the request timeout limit?

Of course, I'd probably need to look into async handling of such long-running processes.

As answered on an official Heroku discussion (the link is dead now), you can't set the timeout higher than 30 seconds:

Heroku kills all requests that take longer than 30s. There is no way to change that behavior.


You need to redesign how you send your request by splitting your call into multiple, smaller chunks. JavaScript is the way to go.

This is not specific to Heroku, and increasing the timeout is a bad idea in general; the idea is that you should return a response as quickly as possible, and for anything that could take more than a few seconds you should accept the request, queue it for background processing, and return a response right away so your client isn't blocked.
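
A minimal sketch of that accept-and-poll pattern in Flask, assuming a Redis-backed RQ queue (anticipating the redis queue mentioned further down) and a hypothetical tasks.long_task function that does the slow work:

import os

import redis
from flask import Flask, jsonify, url_for
from rq import Queue

from tasks import long_task  # hypothetical module holding the slow function

app = Flask(__name__)

# Shared Redis connection and queue; REDIS_URL would typically come from a Redis add-on.
redis_conn = redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))
q = Queue(connection=redis_conn)

@app.route("/start", methods=["POST"])
def start():
    # Hand the slow work to the queue and answer immediately, well under the 30-second limit.
    job = q.enqueue(long_task)
    return jsonify({"job_id": job.get_id(),
                    "status_url": url_for("status", job_id=job.get_id())}), 202

@app.route("/status/<job_id>")
def status(job_id):
    # The client polls this endpoint until the job reports it is finished.
    job = q.fetch_job(job_id)
    if job is None:
        return jsonify({"state": "unknown"}), 404
    return jsonify({"state": job.get_status(), "result": job.result})

The client gets the 202 response immediately and then polls the status URL (for example from JavaScript) until the job is done.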

For Heroku this means you have to spin up a worker process. This is different from the web dyno you already have, in that it is designed to run in the background and does not have such timeout restrictions.
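
As a rough sketch, the Procfile would then declare a worker process type alongside the existing web dyno; the worker command and the worker.py name here are assumptions (a worker script is sketched further below):

web: gunicorn server:app --worker-class gevent --log-file=-
worker: python worker.py

The 30-second router timeout only applies to HTTP requests hitting the web dyno; the worker dyno can run a job for as long as it needs.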

For a great overview of this common pattern, there is an excellent writeup on the Heroku Dev Center that details the entire process.

Specifically for Python on Heroku, this is implemented using Redis Queue (RQ).
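
A minimal worker script in that style, assuming REDIS_URL is set (e.g. by a Redis add-on) and that jobs are enqueued on RQ's default queue, might look like:

import os

import redis
from rq import Queue, Worker

# Connect to the same Redis instance the web dyno enqueues jobs on.
redis_conn = redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

if __name__ == "__main__":
    # Pull jobs off the "default" queue; this process is not subject to the 30-second router limit.
    worker = Worker([Queue("default", connection=redis_conn)], connection=redis_conn)
    worker.work()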
