Architecture Flask vs FastAPI
I have been tinkering with Flask and FastAPI to see how they act as servers.
One of the main things I want to understand is how Flask and FastAPI each deal with multiple requests from multiple clients, especially when the code has efficiency problems (a long database query, say).
So I tried writing a simple piece of code to explore the question.
The code is trivial: when a client hits the route, the application sleeps for 10 seconds before returning a result.
It looks like this:
FastAPI
```python
import uvicorn
from fastapi import FastAPI
from time import sleep

app = FastAPI()

@app.get('/')
async def root():
    print('Sleeping for 10')
    sleep(10)
    print('Awake')
    return {'message': 'hello'}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```
Flask
```python
from flask import Flask
from flask_restful import Resource, Api
from time import sleep

app = Flask(__name__)
api = Api(app)

class Root(Resource):
    def get(self):
        print('Sleeping for 10')
        sleep(10)
        print('Awake')
        return {'message': 'hello'}

api.add_resource(Root, '/')

if __name__ == "__main__":
    app.run()
```
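The same two-client experiment can also be run from a terminal instead of two Chrome tabs. A minimal sketch, assuming one of the servers above is already running on 127.0.0.1:8000:

```shell
# Fire two requests at (almost) the same time and wait for both.
# With the async-def FastAPI version above, the second response comes
# back roughly 20 s after the start; with Flask, both come back in ~10 s.
curl http://127.0.0.1:8000/ &
curl http://127.0.0.1:8000/ &
wait
```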
After starting the applications, I tried accessing each of them at the same time from 2 different Chrome clients. The results are below:
FastAPI (results screenshot not included)
Flask (results screenshot not included)
As you can see, with FastAPI the code first waits out the full 10 seconds before processing the next request, while with Flask the next request is processed while the 10-second sleep is still in progress.
Despite some googling, there is no really direct answer on this topic.
If anyone has any comments that can shed light on this, please leave them in the comments. Your opinions are all appreciated. Thank you all very much for your time.
Edit: As an update on this, I explored a bit more and found the concept of a process manager. For example, we can run uvicorn under a process manager (gunicorn). By adding more workers, I was able to achieve something like Flask's behavior. Still testing the limits of this, however. https://www.uvicorn.org/deployment/
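For reference, such an invocation might look like this (a sketch, assuming the FastAPI code above is saved as fast_api.py):

```shell
# Gunicorn as the process manager, with 4 Uvicorn worker processes.
gunicorn -w 4 -k uvicorn.workers.UvicornWorker fast_api:app
```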
Thanks to everyone who left comments. Appreciate it.
This seemed somewhat interesting, so I ran some tests with ApacheBench:
Flask
```python
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class Root(Resource):
    def get(self):
        return {"message": "hello"}

api.add_resource(Root, "/")
```
FastAPI
```python
from fastapi import FastAPI

app = FastAPI(debug=False)

@app.get("/")
async def root():
    return {"message": "hello"}
```
I ran 2 tests for FastAPI, and the results were very different:

```shell
# Test 1: gunicorn with 4 uvicorn workers
gunicorn -w 4 -k uvicorn.workers.UvicornWorker fast_api:app
# Test 2: pure uvicorn
uvicorn fast_api:app --reload
```

So here are the benchmark results for 5000 requests at a concurrency of 500:
FastAPI with Uvicorn Workers
Concurrency Level: 500
Time taken for tests: 0.577 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 720000 bytes
HTML transferred: 95000 bytes
Requests per second: 8665.48 [#/sec] (mean)
Time per request: 57.700 [ms] (mean)
Time per request: 0.115 [ms] (mean, across all concurrent requests)
Transfer rate: 1218.58 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 4.5 6 30
Processing: 6 49 21.7 45 126
Waiting: 1 42 19.0 39 124
Total: 12 56 21.8 53 127
Percentage of the requests served within a certain time (ms)
50% 53
66% 64
75% 69
80% 73
90% 81
95% 98
98% 112
99% 116
100% 127 (longest request)
FastAPI - Pure Uvicorn
Concurrency Level: 500
Time taken for tests: 1.562 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 720000 bytes
HTML transferred: 95000 bytes
Requests per second: 3200.62 [#/sec] (mean)
Time per request: 156.220 [ms] (mean)
Time per request: 0.312 [ms] (mean, across all concurrent requests)
Transfer rate: 450.09 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 8 4.8 7 24
Processing: 26 144 13.1 143 195
Waiting: 2 132 13.1 130 181
Total: 26 152 12.6 150 203
Percentage of the requests served within a certain time (ms)
50% 150
66% 155
75% 158
80% 160
90% 166
95% 171
98% 195
99% 199
100% 203 (longest request)
For Flask:
Concurrency Level: 500
Time taken for tests: 27.827 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 830000 bytes
HTML transferred: 105000 bytes
Requests per second: 179.68 [#/sec] (mean)
Time per request: 2782.653 [ms] (mean)
Time per request: 5.565 [ms] (mean, across all concurrent requests)
Transfer rate: 29.13 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 87 293.2 0 3047
Processing: 14 1140 4131.5 136 26794
Waiting: 1 1140 4131.5 135 26794
Total: 14 1227 4359.9 136 27819
Percentage of the requests served within a certain time (ms)
50% 136
66% 148
75% 179
80% 198
90% 295
95% 7839
98% 14518
99% 27765
100% 27819 (longest request)
Flask: Time taken for tests: 27.827 seconds
FastAPI - Uvicorn: Time taken for tests: 1.562 seconds
FastAPI - Uvicorn Workers: Time taken for tests: 0.577 seconds

With Uvicorn workers, FastAPI is almost 48 times faster than Flask, which is quite understandable given ASGI vs WSGI. So I ran the tests again with a concurrency of 1:

FastAPI - Uvicorn Workers: Time taken for tests: 1.615 seconds
FastAPI - Pure Uvicorn: Time taken for tests: 2.681 seconds
Flask: Time taken for tests: 5.541 seconds
Flask with Waitress
Server Software: waitress
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 21 bytes
Concurrency Level: 1000
Time taken for tests: 3.403 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 830000 bytes
HTML transferred: 105000 bytes
Requests per second: 1469.47 [#/sec] (mean)
Time per request: 680.516 [ms] (mean)
Time per request: 0.681 [ms] (mean, across all concurrent requests)
Transfer rate: 238.22 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 4 8.6 0 30
Processing: 31 607 156.3 659 754
Waiting: 1 607 156.3 658 753
Total: 31 611 148.4 660 754
Percentage of the requests served within a certain time (ms)
50% 660
66% 678
75% 685
80% 691
90% 702
95% 728
98% 743
99% 750
100% 754 (longest request)
Gunicorn with Uvicorn workers
Server Software: uvicorn
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 19 bytes
Concurrency Level: 1000
Time taken for tests: 0.634 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 720000 bytes
HTML transferred: 95000 bytes
Requests per second: 7891.28 [#/sec] (mean)
Time per request: 126.722 [ms] (mean)
Time per request: 0.127 [ms] (mean, across all concurrent requests)
Transfer rate: 1109.71 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 28 13.8 30 62
Processing: 18 89 35.6 86 203
Waiting: 1 75 33.3 70 171
Total: 20 118 34.4 116 243
Percentage of the requests served within a certain time (ms)
50% 116
66% 126
75% 133
80% 137
90% 161
95% 189
98% 217
99% 230
100% 243 (longest request)
Pure Uvicorn, but this time with 4 workers: uvicorn fastapi:app --workers 4
Server Software: uvicorn
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 19 bytes
Concurrency Level: 1000
Time taken for tests: 1.147 seconds
Complete requests: 5000
Failed requests: 0
Total transferred: 720000 bytes
HTML transferred: 95000 bytes
Requests per second: 4359.68 [#/sec] (mean)
Time per request: 229.375 [ms] (mean)
Time per request: 0.229 [ms] (mean, across all concurrent requests)
Transfer rate: 613.08 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 20 16.3 17 70
Processing: 17 190 96.8 171 501
Waiting: 3 173 93.0 151 448
Total: 51 210 96.4 184 533
Percentage of the requests served within a certain time (ms)
50% 184
66% 209
75% 241
80% 260
90% 324
95% 476
98% 504
99% 514
100% 533 (longest request)
You are using the time.sleep() function in an async endpoint. time.sleep() is blocking and should never be used in asynchronous code. What you should probably use instead is the asyncio.sleep() function:
```python
import asyncio
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get('/')
async def root():
    print('Sleeping for 10')
    await asyncio.sleep(10)
    print('Awake')
    return {'message': 'hello'}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```
This way, every request will still take about 10 seconds to complete, but you will be able to serve multiple requests concurrently.
In general, asynchronous frameworks provide replacements for all the blocking functions in the standard library (sleep functions, IO functions, etc.). You are meant to use those replacements when writing async code and (optionally) await them.
Some non-blocking frameworks and libraries, such as gevent, do not offer replacements. Instead they monkey-patch functions in the standard library to make them non-blocking. As far as I know, this is not the case for the newer async frameworks and libraries, because they are designed to let the developer use the async-await syntax.
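As a minimal illustration of the difference (outside FastAPI entirely), two awaited asyncio.sleep calls overlap on a single event loop, which is exactly what time.sleep would prevent:

```python
import asyncio
import time

async def nap():
    # Non-blocking sleep: control returns to the event loop while waiting.
    await asyncio.sleep(0.5)

async def main():
    start = time.perf_counter()
    # Two naps run concurrently on a single event loop...
    await asyncio.gather(nap(), nap())
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# ...so the total wall time is ~0.5 s, not ~1.0 s.
print(f"elapsed: {elapsed:.2f}s")
```

Replacing `await asyncio.sleep(0.5)` with `time.sleep(0.5)` makes the total time roughly double, because the blocking call never yields to the loop.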
I think you are blocking the event queue in FastAPI, which is an asynchronous framework, whereas in Flask each request probably runs in its own new thread. Move all CPU-bound tasks to separate processes, or, in your FastAPI example, just sleep on the event loop (do not use time.sleep here). In FastAPI, run IO-bound tasks asynchronously.
A blocking operation stops your event loop from running other tasks. When you call the sleep() function, all tasks (requests) are stuck waiting for it to finish, which kills all the benefits of asynchronous code execution.
To understand why this code is problematic, it helps to know how asynchronous code works in Python and to have some understanding of the GIL. Concurrency and async code are explained very well in FastAPI's documentation.
@Asotos has described why your code is slow: yes, you should use coroutines for I/O operations, because blocking calls stall the event loop's execution (sleep() is a blocking operation). Suggesting async functions so that the event loop is not blocked is reasonable, but at the moment not all libraries have async versions.
If you cannot use an async version of a library, you can simply define your route function as a plain def function instead of async def, without the async/await and asyncio.sleep() optimization.
If a route function is defined as synchronous (def), FastAPI is smart enough to call it in an external thread pool, so the main thread running the event loop is not blocked, and your benchmark results will be better even without await asyncio.sleep(). This section of the docs explains it in detail.
```python
from time import sleep

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get('/')
def root():
    print('Sleeping for 10')
    sleep(10)
    print('Awake')
    return {'message': 'hello'}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```
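The thread-pool behavior can be sketched outside FastAPI as well. This is only an illustration of the idea, not FastAPI's actual internals:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_handler(_):
    # Stands in for a plain `def` route with a blocking call inside.
    time.sleep(0.5)
    return {'message': 'hello'}

start = time.perf_counter()
# Two blocking handlers dispatched to worker threads run side by side,
# which is roughly what FastAPI does for synchronous route functions.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(blocking_handler, range(2)))
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # ~0.5 s total, not ~1.0 s
```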
By the way, if the operation run in the thread pool is CPU-bound (heavy computation, for example), you will not gain much because of the GIL. CPU-intensive tasks must run in separate processes.
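Moving CPU-bound work off the event loop can be sketched with run_in_executor and a process pool. A minimal sketch; the function name and workload here are made up for illustration:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    # CPU-heavy work: extra threads would not help here because of the GIL.
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=1) as pool:
        # The event loop stays free while a separate process computes.
        return await loop.run_in_executor(pool, cpu_bound, 10_000)

result = asyncio.run(main())
print(result)
```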
What you actually need in FastAPI for a real-world scenario like this is background tasks; in practice you might be sending an email, running a heavy database query, and so on.
FastAPI
```python
from fastapi import FastAPI, BackgroundTasks
import time

app = FastAPI()

def sleep(msg):
    time.sleep(10)
    print(msg)

@app.get('/')
async def root(background_tasks: BackgroundTasks):
    msg = 'Sleeping for 10'
    background_tasks.add_task(sleep, msg)
    print('Awake')
    return {'message': 'hello'}
```
Then try checking the benchmark again.