
How do I determine the number of requests per second using aiohttp?

I'm trying to create a web traffic simulator using aiohttp. The following code sample makes 10k requests asynchronously. I want to know how many of them are happening concurrently, so I can say whether this models 10k users requesting a website simultaneously.

How would I determine the number of concurrent network requests, or how many requests per second aiohttp makes? Is there a way to debug/profile the number of concurrent requests in real time?

Is there a better way to model a web traffic simulator using any other programming language?

import asyncio
import aiohttp

async def fetch(session, url):
    with aiohttp.Timeout(10, loop=session.loop):
        async with session.get(url) as response:
            return await response.text()

async def run(r):
    url = "http://localhost:3000/"
    tasks = []

    # Create a client session so that we don't open a new connection
    # per request.
    async with aiohttp.ClientSession() as session:
        for i in range(r):
            html = await fetch(session, url)
            print(html)


# make 10k requests per second ?? (not confident this is true)
number = 10000
loop = asyncio.get_event_loop()
loop.run_until_complete(run(number))

Hi, first of all, there's a bug in the original code:

async with aiohttp.ClientSession() as session:
    for i in range(r):
        # This line (the await part) makes your code wait for the response,
        # which means you only ever have one request in flight at a time.
        html = await fetch(session, url)

If you fix the bug, you'll get what you wanted: all of the requests will start at the same time.
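For example, a minimal version of the fix is to create all the coroutines first and let asyncio.gather run them concurrently (a sketch, using the same fetch and url as above):

async with aiohttp.ClientSession() as session:
    # Build all the coroutines first, then schedule them together;
    # gather drives them concurrently on the event loop.
    tasks = [fetch(session, url) for _ in range(r)]
    htmls = await asyncio.gather(*tasks)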

You are going to hammer the service unless you throttle the requests with a Semaphore or a Queue.
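If throttling is what you need, a minimal sketch with asyncio.Semaphore might look like this (the limit of 100 is an arbitrary assumption; tune it to your target load):

import asyncio
import aiohttp

async def fetch(session, url, sem):
    # The semaphore caps how many requests are in flight at once.
    async with sem:
        async with session.get(url) as response:
            return await response.text()

async def run(r):
    url = "http://localhost:3000/"
    sem = asyncio.Semaphore(100)  # at most 100 concurrent requests (arbitrary)
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url, sem) for _ in range(r)]
        return await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(run(10000))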

Anyway, if hammering the service is what you want, you can use this:

import asyncio
import aiohttp
import tqdm


async def fetch(session, url):
    with aiohttp.Timeout(10, loop=session.loop):
        async with session.get(url) as response:
            return await response.text()


async def run(r):
    url = "http://localhost:3000/"
    # The default connector allows only 20 connections; raise the limit
    # since you want to stress the server.
    conn = aiohttp.TCPConnector(limit=1000)
    responses = []
    async with aiohttp.ClientSession(connector=conn) as session:
        tasks = [asyncio.ensure_future(fetch(session, url)) for _ in range(r)]
        # Show a progress bar while the responses come in
        for f in tqdm.tqdm(asyncio.as_completed(tasks), total=len(tasks)):
            responses.append(await f)
    return responses

number = 10000
loop = asyncio.get_event_loop()
loop.run_until_complete(run(number))

Thanks to "asyncio aiohttp progress bar with tqdm" for the tqdm trick :)
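To answer the original "requests per second" question, one simple approach (a sketch, not part of the original answer) is to time the whole run and divide, while a plain counter tracks how many requests are in flight so you can see the peak concurrency:

import asyncio
import time
import aiohttp

in_flight = 0  # requests currently in flight
peak = 0       # highest concurrency observed

async def fetch(session, url):
    global in_flight, peak
    # Safe without locks: asyncio runs all of this in a single thread.
    in_flight += 1
    peak = max(peak, in_flight)
    try:
        async with session.get(url) as response:
            return await response.text()
    finally:
        in_flight -= 1

async def run(r):
    url = "http://localhost:3000/"
    conn = aiohttp.TCPConnector(limit=1000)
    async with aiohttp.ClientSession(connector=conn) as session:
        tasks = [asyncio.ensure_future(fetch(session, url)) for _ in range(r)]
        return await asyncio.gather(*tasks)

number = 10000
loop = asyncio.get_event_loop()
start = time.perf_counter()
loop.run_until_complete(run(number))
elapsed = time.perf_counter() - start
print("requests per second: %.1f" % (number / elapsed))
print("peak concurrent requests: %d" % peak)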

I also suggest reading https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html for a better understanding of how coroutines work.
