
Is concurrency possible in tornado?

I understand Tornado is a single-threaded, non-blocking server, so requests are handled sequentially (except when an event-driven approach is used for I/O operations).

Is there a way to process multiple requests in parallel in Tornado for normal (non-I/O) execution? I can't fork multiple processes, because I need a common memory space shared across requests.

If it's not possible, please suggest other Python servers that can handle parallel requests and also support WSGI.

If you are truly going to be dealing with multiple simultaneous requests that are compute-bound, and you want to do it in Python, then you need a multi-process server, not a multi-threaded one. CPython has a Global Interpreter Lock (GIL) that prevents more than one thread from executing Python bytecode at the same time.
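The effect of the GIL on compute-bound work can be illustrated with a small stdlib sketch (the function names here are illustrative, not from the answer): the same pure-Python loop is farmed out to a thread pool and then to a process pool, and only the processes can actually run in parallel.

```python
# Sketch: CPU-bound work scales with processes but not with threads under the GIL.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python busy loop; a thread running this holds the GIL throughout.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, n=2_000_000, workers=4):
    # Run `workers` copies of burn() concurrently and report wall time.
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        list(ex.map(burn, [n] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Threads typically show little or no speed-up here; processes do,
    # because each process has its own interpreter and its own GIL.
    print(f"threads:   {timed(ThreadPoolExecutor):.2f}s")
    print(f"processes: {timed(ProcessPoolExecutor):.2f}s")
```

The exact timings depend on the machine and core count, but the thread-pool run is bounded by a single core while the process-pool run is not.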

Most web applications do very little computation; instead, they spend their time waiting for I/O, whether from the database, the disk, or services on other servers. Be sure you actually need to handle compute-bound requests before discarding Tornado.
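The event-driven principle Tornado relies on for I/O-bound requests can be sketched with the stdlib's asyncio (a stand-in here for Tornado's own event loop; `fake_db_query` is a made-up name simulating a database call):

```python
import asyncio
import time

async def fake_db_query(delay):
    # Simulates non-blocking I/O: await yields control to the event loop,
    # which can serve other requests while this "query" is pending.
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.perf_counter()
    # Three 0.1 s "queries" overlap instead of running back to back,
    # so total wall time stays near 0.1 s rather than 0.3 s.
    results = await asyncio.gather(*(fake_db_query(0.1) for _ in range(3)))
    print(f"{len(results)} queries in {time.perf_counter() - start:.2f}s")
    return results

if __name__ == "__main__":
    asyncio.run(main())
```

This is why a single-threaded server handles many concurrent I/O-bound requests well: no thread ever blocks while waiting.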

The answer to your question really depends on how long these compute-bound requests will run. If they are short-running, and the rate at which they are processed at least matches the rate at which they arrive, then Tornado will be fine. It is truly single-threaded, but it scales very well.

If your compute-bound requests are long-running, then using a threaded server won't necessarily help because, as Ned Batchelder already pointed out, the GIL will be a bottleneck.
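One common way around this, sketched below with stdlib asyncio in place of Tornado's loop (names like `heavy` and `handle_request` are hypothetical), is to keep the event loop single-threaded but offload long compute-bound work to a process pool, so the GIL of the server process is never held by the heavy work:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    # CPU-bound work; runs in a worker process, outside the server's GIL.
    return sum(i * i for i in range(n))

async def handle_request(pool, n):
    loop = asyncio.get_running_loop()
    # Offload to the pool; the event loop keeps serving other requests
    # while the computation runs elsewhere.
    return await loop.run_in_executor(pool, heavy, n)

async def main():
    with ProcessPoolExecutor() as pool:
        results = await asyncio.gather(
            *(handle_request(pool, 1000) for _ in range(4))
        )
    print(results)
    return results

if __name__ == "__main__":
    asyncio.run(main())
```

Note that this trades away the shared memory space the question asks for: each worker process has its own memory, so results must be passed back explicitly.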

If you're able to relax the restriction of having the same memory space across all requests, then you might consider running Tornado with PyZMQ, as it provides a way of running multiple Tornado back-ends fronted by a single Tornado instance. This allows you to continue using Tornado for the entire solution. See PyZMQ's web.zmqweb module for more information.

