
How to maintain multiple run_forever handlers simultaneously?

Imagine you have a background processing daemon which could be controlled by a web interface.

So, the app is an object with some methods responsible for handling requests, plus one special method that needs to be called repeatedly, regardless of the state of any requests.

When using aiohttp, the web part is fairly straightforward: you instantiate an application instance and set things up as the aiohttp.web.run_app source does. Everything is clear. Now let's say your app instance has that special method, call it app.process, which is structured like so:

async def process(self):
    while self._is_running:
        await self._process_single_job()

With this approach you could call loop.run_until_complete(app.process()), but it obviously blocks, and thus leaves no opportunity to set up the web part. I could, of course, split these two responsibilities into separate processes and have them communicate through a database, but that would complicate things, so I would prefer to avoid it if at all possible.

So, how do I make the event loop call some method repeatedly while still running a web app?

You have to schedule the execution of app.process() as a background task. Note that with current aiohttp the event loop is not yet running when the application object is constructed (and Application.loop is deprecated), so create the task from an on_startup handler rather than in __init__, and cancel it on cleanup:

import asyncio
from aiohttp import web

class MyApp(web.Application):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.on_startup.append(self._start_process)
        self.on_cleanup.append(self._stop_process)

    async def _start_process(self, app):
        # The loop is running by the time on_startup fires,
        # so the background task can be scheduled here.
        self.process_task = asyncio.create_task(self.process())

    async def _stop_process(self, app):
        self.process_task.cancel()
        try:
            await self.process_task
        except asyncio.CancelledError:
            pass

    async def process(self):
        while True:
            print(await asyncio.sleep(1, result='ping'))

if __name__ == '__main__':
    web.run_app(MyApp())
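To see why create_task solves the blocking problem, here is a plain-asyncio sketch with no web framework involved; process() and serve() are illustrative stand-ins for your periodic job and the web server's main coroutine:

```python
import asyncio

async def process(results):
    # Stand-in for app.process(): a short periodic background job.
    for _ in range(3):
        await asyncio.sleep(0.01)
        results.append('job')

async def serve(results):
    # Stand-in for the web server's main coroutine.
    await asyncio.sleep(0.05)
    results.append('served')

async def main():
    results = []
    # create_task schedules process() without awaiting it,
    # so serve() runs concurrently on the same event loop.
    task = asyncio.create_task(process(results))
    await serve(results)
    await task
    return results

print(asyncio.run(main()))
```

Both coroutines make progress interleaved on a single loop, which is exactly what the on_startup handler above achieves for the real app.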
