I have the following scenario:
from time import sleep

async def do_a(a):
    sleep(0.001)
    return 1*a

async def do_b(a):
    sleep(0.01)
    return 2*a

async def do_c(b):
    sleep(1)
    return 3*b

async def my_func():
    results = []
    for i in range(3):
        a = await do_a(i)
        b = await do_b(a)
        c = await do_c(b)
        results.append(c)
    return results

if __name__ == "__main__":
    import asyncio
    print(asyncio.run(my_func()))
Basically, I am calling asynchronous functions in a loop. Running the above code takes ~3 s. I would like to run each iteration concurrently so the total time drops to ~1 s (I know that estimate is a bit optimistic, but I want to cut the running time at least somewhat). I have been looking into different Python libraries that I think could help, but I'm having trouble deciding which one is useful in this case. Python's multiprocessing, threading and concurrent.futures all seem to implement one form or another of parallelism/concurrency. What should I do? Can you show me how you would proceed in this case?
You should use asyncio.sleep instead of time.sleep: time.sleep blocks the event loop, so nothing else can run while it waits. If you want everything to run concurrently, here is one way to do it with asyncio.gather:
import asyncio

async def do_a(a):
    await asyncio.sleep(0.001)
    return 1*a

async def do_b(a):
    await asyncio.sleep(0.01)
    return 2*a

async def do_c(b):
    await asyncio.sleep(1)
    return 3*b

async def do_abc(i):
    # One full pipeline for a single input; the three pipelines
    # then run concurrently under asyncio.gather.
    a = await do_a(i)
    b = await do_b(a)
    return await do_c(b)

async def my_func():
    return await asyncio.gather(*map(do_abc, range(3)))

if __name__ == "__main__":
    print(asyncio.run(my_func()))
    # [0, 6, 12]
If the actual work that runs in place of sleep is synchronous (blocking), you would do essentially the same thing, except you would have to defer that work to an executor so it runs in a thread (or process) pool instead of blocking the event loop.
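As a sketch of that executor approach: the `blocking_work` function below is a hypothetical stand-in for your real blocking code, and passing `None` as the executor uses asyncio's default thread pool via `loop.run_in_executor`.

```python
import asyncio
import time

def blocking_work(b):
    # Hypothetical stand-in for real synchronous (blocking) code.
    time.sleep(1)
    return 3 * b

async def do_c(b):
    loop = asyncio.get_running_loop()
    # None -> run blocking_work in the default ThreadPoolExecutor,
    # so the event loop is free while the call blocks.
    return await loop.run_in_executor(None, blocking_work, b)

async def main():
    # The three blocking calls overlap in separate threads,
    # so the total wall time is ~1 s rather than ~3 s.
    return await asyncio.gather(*(do_c(b) for b in range(3)))

if __name__ == "__main__":
    print(asyncio.run(main()))
```

On Python 3.9+ you can write `await asyncio.to_thread(blocking_work, b)` instead, which is shorthand for the same thread-pool dispatch. Note that threads only help here if the blocking work releases the GIL (I/O, C extensions, sleeps); for CPU-bound work you would pass a `concurrent.futures.ProcessPoolExecutor` instead of `None`.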