
Python Tornado max_buffer_size across all requests

I know I can set max_buffer_size in Tornado to limit the amount of data that can be uploaded to the server. But what I am trying to do is restrict the total amount of data across all requests to my Tornado server.

For example, I have 500 simultaneous requests being sent to my Tornado server. Each request is uploading 1MB of data. I want my Tornado server to reject connections when >150MB of data has been received across all requests. So the first 150 requests will be received, but then the next 350 will be rejected by Tornado before buffering any of that data into memory.

Is it possible to do this in Tornado?

There's not currently a way to set a global limit like this (but it might be a nice thing to add).

The best you can do currently is to keep the memory used by each connection low: set a small default max_body_size, and for RequestHandlers that need to receive more data than that, use @stream_request_body and call self.request.connection.set_max_body_size(large_value) in prepare(). With the @stream_request_body decorator, each connection's memory usage is limited by the chunk_size parameter instead of reading the whole body at once. Then, in your data_received method, you can await an allocation from a global semaphore to control total memory usage beyond the per-connection chunk size.
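A minimal sketch of that pattern, not from the original answer: the 150 MB cap, the UploadHandler class, and the _buffered / _budget_freed bookkeeping helpers are assumptions introduced here for illustration; only set_max_body_size, @stream_request_body, and the flow-control behaviour of data_received come from Tornado itself.

```python
import tornado.ioloop
import tornado.web
from tornado.locks import Condition

GLOBAL_BUDGET_BYTES = 150 * 1024 * 1024  # assumed server-wide cap (150 MB)
_buffered = 0                            # bytes currently held across all uploads
_budget_freed = Condition()              # notified whenever _buffered shrinks


@tornado.web.stream_request_body
class UploadHandler(tornado.web.RequestHandler):
    def prepare(self):
        # Raise the body limit for this handler only; everything else keeps
        # the small default max_body_size passed to the HTTPServer below.
        self.request.connection.set_max_body_size(1024 * 1024 * 1024)
        self._held = 0  # bytes this request has charged against the budget

    async def data_received(self, chunk):
        global _buffered
        # data_received returns a Future, so Tornado will not read the next
        # chunk from the socket until it resolves; waiting here applies
        # back-pressure to the client instead of buffering more data.
        while _buffered + len(chunk) > GLOBAL_BUDGET_BYTES:
            await _budget_freed.wait()
        _buffered += len(chunk)
        self._held += len(chunk)
        # ... process or persist the chunk here ...

    def on_finish(self):
        self._release()

    def on_connection_close(self):
        self._release()

    def _release(self):
        # Return this request's bytes to the global budget and wake waiters.
        global _buffered
        if getattr(self, "_held", 0):
            _buffered -= self._held
            self._held = 0
            _budget_freed.notify_all()


if __name__ == "__main__":
    app = tornado.web.Application([(r"/upload", UploadHandler)])
    # Keep the server-wide default body limit small; only UploadHandler
    # raises it in prepare().
    app.listen(8888, max_body_size=1 * 1024 * 1024)
    tornado.ioloop.IOLoop.current().start()
```

Note that this sketch only releases budget when a request finishes or the connection closes; a real implementation would also release bytes as each chunk is written to disk, and would likely respond with 503 rather than waiting indefinitely once the budget is exhausted.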
