Multiple Async Context Managers
Is it possible to combine async context managers in Python? Something similar to asyncio.gather, but able to be used with context managers. Something like this:
async def foo():
    async with asyncio.gather_cm(start_vm(), start_vm()) as (vm1, vm2):
        await vm1.do_something()
        await vm2.do_something()
Is this currently possible?
Something close to gather_cm can be achieved with AsyncExitStack, introduced in Python 3.7:
from contextlib import AsyncExitStack

async def foo():
    async with AsyncExitStack() as stack:
        vm1, vm2 = await asyncio.gather(
            stack.enter_async_context(start_vm()),
            stack.enter_async_context(start_vm()))
        await vm1.do_something()
        await vm2.do_something()
Unfortunately, the __aexit__s will still be run sequentially. This is because AsyncExitStack simulates nested context managers, which have a well-defined order and cannot overlap. The outer context manager's __aexit__ is given information on whether the inner one raised an exception. (A database handle's __aexit__ might use this to roll back the transaction in case of exception and commit it otherwise.) Running the __aexit__s in parallel would make the context managers overlap and the exception information unavailable or unreliable. So although gather(...) runs the __aenter__s in parallel, AsyncExitStack records which one came first and runs the __aexit__s in reverse order.
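The sequential, reverse-order teardown is easy to observe. Here is a minimal sketch (the noisy_cm helper is invented for illustration) that enters two context managers through AsyncExitStack and records the order of events:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events = []

@asynccontextmanager
async def noisy_cm(name):
    # invented helper: records enter/exit order
    events.append('enter ' + name)
    try:
        yield name
    finally:
        events.append('exit ' + name)

async def main():
    async with AsyncExitStack() as stack:
        await asyncio.gather(
            stack.enter_async_context(noisy_cm('a')),
            stack.enter_async_context(noisy_cm('b')))

asyncio.run(main())
# exits run one at a time, in reverse order of entry
print(events)
```

Even though the __aenter__s were gathered, the stack unwinds strictly last-in, first-out, so the total teardown time is the sum of the individual __aexit__s rather than their maximum.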
With async context managers an alternative like gather_cm would make perfect sense. One could drop the nesting semantics and provide an aggregate context manager that works like an "exit pool" rather than a stack. The exit pool takes a number of context managers that are independent of each other, which allows their __aenter__ and __aexit__ methods to be run in parallel.
The tricky part is handling exceptions correctly: if any __aenter__ raises, the exception must be propagated to prevent the with block from being run. To ensure correctness, the pool must guarantee that __aexit__ will be invoked on all the context managers whose __aenter__ has completed.
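To see why that guarantee matters, consider what happens without it. The following sketch (the resource class is invented as a stand-in for something like start_vm()) gathers the __aenter__s naively; when one of them raises, the resource that was already entered is never exited:

```python
import asyncio

entered, exited = [], []

class resource:
    # invented stand-in for an async resource such as a VM
    def __init__(self, name, fail=False):
        self.name, self.fail = name, fail

    async def __aenter__(self):
        if self.fail:
            raise RuntimeError(self.name)
        entered.append(self.name)
        return self

    async def __aexit__(self, *args):
        exited.append(self.name)

async def naive():
    cms = [resource('ok'), resource('bad', fail=True)]
    # naive approach: just gather the __aenter__s
    vms = await asyncio.gather(*(cm.__aenter__() for cm in cms))
    # never reached -- and nothing cleans up resource('ok')

try:
    asyncio.run(naive())
except RuntimeError:
    pass

print(entered, exited)  # 'ok' was entered but never exited
```

The pool below avoids this leak by tracking which __aenter__s succeeded and calling __aexit__ on exactly those context managers.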
Here is an example implementation:
import asyncio
import sys

class gather_cm:
    def __init__(self, *cms):
        self._cms = cms

    async def __aenter__(self):
        futs = [asyncio.create_task(cm.__aenter__())
                for cm in self._cms]
        await asyncio.wait(futs)
        # only exit the cms we've successfully entered
        self._cms = [cm for cm, fut in zip(self._cms, futs)
                     if not fut.cancelled() and not fut.exception()]
        try:
            return tuple(fut.result() for fut in futs)
        except:
            await self._exit(*sys.exc_info())
            raise

    async def _exit(self, *args):
        if not self._cms:
            return False
        # don't use gather() to ensure that we wait for all __aexit__s
        # to complete even if one of them raises
        done, _pending = await asyncio.wait(
            [asyncio.create_task(cm.__aexit__(*args))
             for cm in self._cms])
        return all(suppress.result() for suppress in done)

    async def __aexit__(self, *args):
        # the exits run in parallel, so they can't see each other's
        # exceptions; pass the exception info from the `async with`
        # body to all of them
        return await self._exit(*args)
This test program shows how it works:
class test_cm:
    def __init__(self, x):
        self.x = x

    async def __aenter__(self):
        print('__aenter__', self.x)
        return self.x

    async def __aexit__(self, *args):
        print('__aexit__', self.x, args)

async def foo():
    async with gather_cm(test_cm('foo'), test_cm('bar')) as (cm1, cm2):
        print('cm1', cm1)
        print('cm2', cm2)

asyncio.run(foo())