
tornado: is BaseHandler.write blocking?

I have two request handlers. One delivers a huge amount of data, the other only a few datasets.

class HugeQueryHandler(BaseHandler):
    @gen.coroutine
    def get(self):
        try:
            # Run the query asynchronously, then write every row to the response.
            cursor = yield momoko.Op(self.db.execute, 'SELECT * FROM huge_table;')
            for row in cursor:
                self.write('Query results: {} <br />'.format(row))
        except Exception as error:
            self.write(str(error))

        self.finish()


class SmallQueryHandler(BaseHandler):

    @gen.coroutine
    def get(self):
        try:
            cursor = yield momoko.Op(self.db.execute, 'SELECT * FROM small_table;')
            for row in cursor:
                self.write('Query results: {} <br />'.format(row))
        except Exception as error:
            self.write(str(error))

        self.finish()

My question:

Is the for loop that writes the response blocking? When I request the small amount of data right after a call to the huge handler, I have to wait for the first request to finish...

write() does not block on the network (it just appends to a buffer), but you're not yielding anywhere, so the entire loop must run to completion before any other task can run. I think the problem is not the write but the iteration: "for row in cursor" does not yield, so either momoko has buffered the entire result set in memory, or you are blocking while reading from the database. If the latter, you need to access the cursor in a non-blocking way. If the former, there may not be much you can do about it besides breaking the query up into smaller chunks. (You could occasionally call "yield gen.Task(self.flush)" during the loop, but this would prolong the time that the full amount is buffered in memory, so it may not be advisable.)
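
As an illustration of the chunking idea, a rough sketch could look like the following (the LIMIT/OFFSET query, the CHUNK_SIZE value, and the handler name are assumptions for illustration, not part of the original code):

class ChunkedQueryHandler(BaseHandler):

    CHUNK_SIZE = 1000  # hypothetical page size

    @gen.coroutine
    def get(self):
        offset = 0
        while True:
            # Fetch one page of the result set asynchronously.
            cursor = yield momoko.Op(
                self.db.execute,
                'SELECT * FROM huge_table LIMIT %s OFFSET %s;',
                (self.CHUNK_SIZE, offset))
            rows = cursor.fetchall()
            if not rows:
                break
            for row in rows:
                self.write('Query results: {} <br />'.format(row))
            # Flush this chunk to the client and let other handlers run.
            yield gen.Task(self.flush)
            offset += self.CHUNK_SIZE
        self.finish()

Every yield returns control to the IOLoop, so a request to SmallQueryHandler can be served between chunks, and only one chunk is buffered at a time.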

So, that's the point. The for loop needs to complete.

What about an approach like this?

class HugeQueryHandler(BaseHandler):

    # Single worker thread; the response string is built here instead of on the IOLoop.
    executor = tornado.concurrent.futures.ThreadPoolExecutor(1)

    @tornado.concurrent.run_on_executor
    def generate_response(self, cursor):
        # Runs on the executor thread and returns a future the coroutine can yield.
        return "<br />".join("{}".format(row) for row in cursor)

    @tornado.web.asynchronous
    @gen.engine
    def get(self):
        try:
            cursor = yield momoko.Op(self.db.execute, 'SELECT * FROM huge_table;')
            # Build the body off the IOLoop, then write it in one shot.
            res = yield self.generate_response(cursor)
            self.write(res)
        except Exception as error:
            self.write(str(error))
        self.finish()
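
For reference, that handler relies on imports along these lines (an assumed setup, not shown in the question; in Tornado 3.x, tornado.concurrent imports the futures package as tornado.concurrent.futures when it is available, which is why the ThreadPoolExecutor reference above can resolve):

import momoko
import tornado.concurrent
import tornado.web
from tornado import gen

If momoko has indeed buffered the full result set, this only moves the string joining off the IOLoop: memory usage is unchanged, but other handlers can be served while the response body is assembled on the executor thread.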
