
Will a FastAPI application with only async endpoints encounter the GIL problem?

If all the FastAPI endpoints are defined as `async def`, then there will only be one thread running, right? (Assuming a single uvicorn worker.)

I just wanted to confirm that in such a setup we will never hit Python's Global Interpreter Lock. If the same were done in Flask with multiple threads for a single gunicorn worker, we would be facing the GIL, which prevents true parallelism between threads.
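The threaded case described above can be demonstrated with a small sketch (plain Python, no web framework needed): a pure-Python CPU-bound function gains little to nothing from a second thread, because only the thread holding the GIL executes bytecode at any moment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_work(n: int) -> int:
    # Pure-Python CPU-bound loop; it holds the GIL while it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

start = time.perf_counter()
for _ in range(2):
    cpu_work(N)
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as ex:
    list(ex.map(cpu_work, [N, N]))
threaded = time.perf_counter() - start

# Under the GIL, two threads give little to no speedup on CPU-bound work;
# the two timings typically come out roughly equal.
print(f"serial: {serial:.2f}s, threaded: {threaded:.2f}s")
```

(Threads do still help in Flask-style apps when the work is I/O-bound, since the GIL is released while waiting on sockets or files; the limitation above is specific to CPU-bound code.)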

So basically, in the FastAPI setup above, parallelism is limited to 1 since there is only one thread, and to make use of all the cores we would need to increase the number of workers, either via gunicorn or uvicorn.

Is my understanding correct?

Your understanding is correct. When using one worker with uvicorn, only one process runs, which means there is only one thread that can take a lock on the interpreter running your application. Due to the asynchronous nature of your FastAPI app, it will be able to handle multiple simultaneous requests concurrently, but not in parallel.
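That single-threaded concurrency can be seen with plain `asyncio`, which is what FastAPI's `async def` endpoints run on. Three simulated requests that each "wait on I/O" for 0.1 s complete in about 0.1 s total, not 0.3 s, all on one thread:

```python
import asyncio
import time

async def handle_request(delay: float) -> float:
    # Stands in for an async endpoint awaiting I/O (e.g. a DB query).
    # While this coroutine sleeps, the event loop runs the others.
    await asyncio.sleep(delay)
    return delay

async def main() -> float:
    start = time.perf_counter()
    # Three "requests" handled concurrently on a single thread:
    await asyncio.gather(
        handle_request(0.1),
        handle_request(0.1),
        handle_request(0.1),
    )
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# Roughly 0.1s: concurrency without thread-level parallelism.
print(f"{elapsed:.2f}s")
```

Note the flip side: an `async def` endpoint that does CPU-bound work without awaiting blocks the event loop, and no other request makes progress until it finishes.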

If you want multiple instances of your application running in parallel, you can increase the number of workers. This will spin up multiple processes (each single-threaded, as above), and Uvicorn will distribute the requests among them.
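For example, either server can be told to fork worker processes at startup. Here `main:app` is a hypothetical `module:attribute` path to your FastAPI instance, and the worker count is commonly set near the machine's core count:

```shell
# Uvicorn managing its own worker processes:
uvicorn main:app --workers 4

# Or gunicorn as the process manager, using uvicorn's ASGI worker class:
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker
```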

Note that you cannot have shared global variables across workers. These are separate instances of your FastAPI app and do not communicate with each other. See this answer for more info on that, and on using databases or caches to work around it.
