
Retrieving data from Python's coroutine object

I am trying to learn async, and now I am trying to get WHOIS information for a batch of domains. I found the library aiowhois, but its documentation is sparse, not enough for a newbie like me.

This code runs without errors, but I don't know how to print the data from the parsed_whois variable, which is a coroutine object.

import asyncio
import aiowhois

resolv = aiowhois.Whois(timeout=10)

async def coro(url, sem):
    parsed_whois = await resolv.query(url)

async def main():
    tasks = []
    sem = asyncio.Semaphore(4)

    for url in domains:
        task = asyncio.Task(coro(url, sem))
        tasks.append(task)
    await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

You can avoid using tasks: just pass the coroutines to gather directly. If you are confused about the difference, this SO Q&A might help you (especially the second answer).

You can have each coroutine return its result, without resorting to global variables:

async def coro(url):
    return await resolv.query(url)

async def main():
    domains = ...
    ops = [coro(url) for url in domains]
    rets = await asyncio.gather(*ops)
    print(rets)

See the official docs to learn more about how to use gather, wait, and the other options.

Note: on recent Python versions (3.7+), you can also simplify running the loop to just

asyncio.run(main())

Note 2: I have removed the semaphore from my code, as it's unclear why you need it and where.
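If the point of the semaphore was to limit how many WHOIS queries run concurrently, it would belong inside the coroutine, around the query itself. Here is a minimal runnable sketch of that pattern; fake_query is a stand-in for resolv.query so the example is self-contained:

```python
import asyncio

async def fake_query(url):
    # Stand-in for resolv.query(url), so the sketch runs on its own
    await asyncio.sleep(0.01)
    return f"whois data for {url}"

async def coro(url, sem):
    async with sem:  # at most 4 queries are in flight at any time
        return await fake_query(url)

async def main():
    sem = asyncio.Semaphore(4)
    domains = ["example.com", "example.org"]
    ops = [coro(url, sem) for url in domains]
    return await asyncio.gather(*ops)  # results in the same order as ops

results = asyncio.run(main())
print(results)
```

With a real rate limit in mind, you would swap fake_query back for resolv.query and adjust the semaphore's count to whatever the WHOIS server tolerates.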

all_parsed_whois = []  # make a global

async def coro(url):
    all_parsed_whois.append(await resolv.query(url))

If you want each result as soon as it is available, you could use task.add_done_callback():

python asyncio add_done_callback with async def
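For illustration, here is a runnable sketch of that callback approach. Again, fake_query is a hypothetical stand-in for resolv.query, and the callback fires as each task finishes rather than after the whole batch:

```python
import asyncio

async def fake_query(url):
    # Stand-in for resolv.query(url), so the sketch runs on its own
    await asyncio.sleep(0.01)
    return f"whois data for {url}"

results = []

def on_done(task):
    # Called as soon as this particular task completes
    results.append(task.result())

async def main():
    tasks = []
    for url in ["example.com", "example.org"]:
        task = asyncio.create_task(fake_query(url))
        task.add_done_callback(on_done)
        tasks.append(task)
    await asyncio.gather(*tasks)

asyncio.run(main())
print(results)
```

Note that the callback receives the finished task itself, and task.result() re-raises any exception the coroutine raised, so a real handler should be prepared for failed lookups.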
