How do I determine the number of requests per second using aiohttp?
I'm trying to create a web traffic simulator using aiohttp. The following code sample makes 10k requests asynchronously. I want to know how many of them are happening concurrently so I can say this models 10k users requesting a website simultaneously.
How would I determine the number of concurrent network requests, or how do I determine how many requests per second are made by aiohttp? Is there a way to debug/profile the number of concurrent requests in real time?
Is there a better way to model a web traffic simulator using any other programming language?
import asyncio
import aiohttp

async def fetch(session, url):
    with aiohttp.Timeout(10, loop=session.loop):
        async with session.get(url) as response:
            return await response.text()

async def run(r):
    url = "http://localhost:3000/"
    tasks = []
    # Create a client session that will ensure we don't open a new
    # connection per request.
    async with aiohttp.ClientSession() as session:
        for i in range(r):
            html = await fetch(session, url)
            print(html)

# make 10k requests per second ?? (not confident this is true)
number = 10000
loop = asyncio.get_event_loop()
loop.run_until_complete(run(number))
Hi, first, there's a bug in the original code:
async with aiohttp.ClientSession() as session:
    for i in range(r):
        # This line (the await part) makes your code wait for a response
        # before looping again - so you only ever have 1 concurrent request
        html = await fetch(session, url)
If you fix the bug you'll get what you wanted: all of the requests will start at the same time.
You are going to hammer the service unless you limit concurrency with a Semaphore or a Queue.
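As a minimal sketch of the Semaphore approach (with `asyncio.sleep` standing in for the real `session.get` call so it runs without a server, and an assumed cap of 100 concurrent requests):

```python
import asyncio

async def fetch(sem, i):
    # The semaphore lets at most 100 coroutines past this point at once;
    # the rest wait here instead of hammering the server.
    async with sem:
        await asyncio.sleep(0.05)  # stand-in for `await session.get(url)`
        return i

async def run(r):
    sem = asyncio.Semaphore(100)  # cap on concurrent requests (assumed value)
    tasks = [asyncio.ensure_future(fetch(sem, i)) for i in range(r)]
    # gather preserves task order in its result list
    return await asyncio.gather(*tasks)

results = asyncio.run(run(1000))
print(len(results))
```

The same pattern works with the real `fetch` below: pass the semaphore in and wrap the `session.get` call in `async with sem:`.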
Anyway, if that's what you want, you can use this:
import asyncio
import aiohttp
import tqdm

async def fetch(session, url):
    with aiohttp.Timeout(10, loop=session.loop):
        async with session.get(url) as response:
            return await response.text()

async def run(r):
    url = "http://localhost:3000/"
    # The default connection limit is only 20 - you want to stress...
    conn = aiohttp.TCPConnector(limit=1000)
    tasks, responses = [], []
    async with aiohttp.ClientSession(connector=conn) as session:
        tasks = [asyncio.ensure_future(fetch(session, url)) for _ in range(r)]
        # This will show you a progress bar over the responses
        for f in tqdm.tqdm(asyncio.as_completed(tasks), total=len(tasks)):
            responses.append(await f)
        return responses

number = 10000
loop = asyncio.get_event_loop()
loop.run_until_complete(run(number))
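To answer the requests-per-second part of the question: time the whole batch and divide. A minimal sketch (again with `asyncio.sleep` standing in for the aiohttp call so it runs without a server; with real requests the timing logic is identical):

```python
import asyncio
import time

async def fetch(i):
    await asyncio.sleep(0.05)  # stand-in for the real aiohttp request
    return i

async def run(r):
    start = time.monotonic()
    tasks = [asyncio.ensure_future(fetch(i)) for i in range(r)]
    responses = await asyncio.gather(*tasks)
    elapsed = time.monotonic() - start
    # Throughput = completed requests divided by wall-clock time
    print(f"{r} requests in {elapsed:.2f}s ({r / elapsed:.0f} req/s)")
    return responses

responses = asyncio.run(run(1000))
```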
Thanks to "asyncio aiohttp progress bar with tqdm" for the tqdm trick :)
I also suggest reading https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html for a better understanding of how coroutines work.
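For the "in real time" part of the question: one simple way to see how many requests are actually concurrent is to keep an in-flight counter around the awaited call and record its peak. A sketch (the counter increments before the await and decrements after, and `asyncio.sleep` again simulates the request):

```python
import asyncio

in_flight = 0   # requests currently awaiting a response
peak = 0        # highest concurrency observed so far

async def fetch(i):
    global in_flight, peak
    in_flight += 1
    peak = max(peak, in_flight)
    try:
        await asyncio.sleep(0.05)  # stand-in for `await session.get(url)`
        return i
    finally:
        in_flight -= 1

async def run(r):
    tasks = [asyncio.ensure_future(fetch(i)) for i in range(r)]
    return await asyncio.gather(*tasks)

asyncio.run(run(500))
print("peak concurrency:", peak)
```

With the buggy sequential version, `peak` would stay at 1; with the `ensure_future` version all tasks start before any finishes, so `peak` climbs to the full count (or to the Semaphore/connector limit if you cap it).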