
How to make an async RequestHandler in Tornado (Python)

Currently I'm working on my backend web server using Tornado.

The problem I have right now:
- when a request is made and the server is processing it, all other requests are blocked

My RequestHandler:

class UpdateServicesRequestHandler(RequestHandler):

    @gen.coroutine
    def get(self):
        update = ServiceUpdate()
        response = yield update.update_all()

        if self.request.headers.get('Origin'):
            self.set_header('Access-Control-Allow-Origin', self.request.headers.get('Origin'))
        self.set_header('Content-Type', 'application/json')
        self.write(response)

My update_all():

@gen.coroutine
def update_all(self):
    for service in self.port_list:
        response = yield self.update_service(str(service.get('port')))
        self.response_list.append(response)

    self.response = json.dumps(self.response_list)
    return self.response
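As a side note, the loop above waits for each service to finish before starting the next one. Once the updates are non-blocking, they can also be launched concurrently and gathered at the end. A minimal sketch of the pattern with plain asyncio (which modern Tornado runs on); the `update_service` stub here is only a placeholder for the real per-service call:

```python
import asyncio
import json

async def update_service(port):
    # Placeholder for the real per-service update call.
    await asyncio.sleep(0.01)
    return {"port": port, "status": "updated"}

async def update_all(port_list):
    # Start every update at once and wait until all have completed.
    responses = await asyncio.gather(
        *(update_service(str(p["port"])) for p in port_list)
    )
    return json.dumps(list(responses))

ports = [{"port": 8001}, {"port": 8002}]
print(asyncio.run(update_all(ports)))
```

In Tornado's `gen.coroutine` style the same thing is written by yielding a list of futures, e.g. `response_list = yield [self.update_service(str(s.get('port'))) for s in self.port_list]`.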

My update_service():

process = Popen([command], stdout=PIPE, stderr=PIPE, shell=True)
output, error = process.communicate()
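That `communicate()` call is what stops the whole server: it blocks the event loop until the subprocess exits. Since Tornado 5 runs on the asyncio event loop, the same command can be run without blocking using the standard library alone. A minimal sketch (the `echo` command is just a placeholder for the real update command):

```python
import asyncio

async def update_service(command):
    # Spawn the command without blocking the event loop.
    process = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    # communicate() here is a coroutine: it waits for the process to
    # exit while the event loop keeps serving other requests.
    output, error = await process.communicate()
    return output.decode(), error.decode()

out, err = asyncio.run(update_service("echo hello"))
print(out)
```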

The thing is, I need the result of the update_all() method. So is there a way to make this request not block my whole server for other requests?

Thank you!

In addition to using tornado.process.Subprocess as dano suggests, you should use stdout=tornado.process.Subprocess.STREAM instead of PIPE, and read from stdout/stderr asynchronously. Using PIPE will work for small amounts of output, but you will deadlock in wait_for_exit() if you use PIPE and the subprocess tries to write too much data (the limit used to be 4KB, but it is higher on most modern Linux systems).

process = Subprocess([command],
                     stdout=Subprocess.STREAM, stderr=Subprocess.STREAM,
                     shell=True)
out, err = yield [process.stdout.read_until_close(),
                  process.stderr.read_until_close()]

You need to use Tornado's wrapper around subprocess.Popen to avoid blocking the event loop:

from tornado.process import Subprocess
from subprocess import PIPE
from tornado import gen

@gen.coroutine
def run_command(command):
    process = Subprocess([command], stdout=PIPE, stderr=PIPE, shell=True)
    yield process.wait_for_exit()  # This waits without blocking the event loop.
    out, err = process.stdout.read(), process.stderr.read()
    # Do whatever you do with out and err
