
Python Asyncio - Running the same task in parallel

I'm trying to run this code in parallel:

availablePrefix = {"http://URL-to-somewehere.com": "true", "http://URL-to-somewehere-else.com": "true"}
def main():
        while True:
            prefixUrl = getFreePrefix() # Waits until new url is free
            sendRequest("https://stackoverflow.com/", prefixUrl) 

def getFreePrefix():
        while True:
            for prefix in self.availablePrefix.keys(): 
                if availablePrefix.get(prefix) == "true":
                    availablePrefix[prefix] = "false" # Can't be used for another request
                    return prefix 

async def sendRequest(self, prefix, suffix):
        url = prefix + "/" +  suffix
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                response = await resp.text()
                availablePrefix[prefix] = "true" # Can be used again
                return json.loads(response)

Basically, I'm trying to run the main() function in parallel. The main() function blocks until getFreePrefix() returns a free prefix (a URL to my server). With this prefix we can access my server and start a request. While a prefix is in use, its entry is set to "false" to indicate that it can't be used for another request right now; when the request completes, it is set back to "true".

What I want to achieve is that every time a prefix becomes free, a new request runs in parallel.

Thanks for helping!

With your inconsistent use of self, I can't tell whether parts of your code are supposed to be part of a class or not. It also appears that your intention is for function main to run in an infinite loop, and as soon as a key of availablePrefix has been processed, it becomes available for processing again. In your current, non-concurrent code, I believe this could have been accomplished more simply as:

# simple list:
availablePrefix = ["http://URL-to-somewehere.com", "http://URL-to-somewehere-else.com"]

def main():
    while True:
        for prefixUrl in availablePrefix:
            sendRequest("https://stackoverflow.com/", prefixUrl)

Then you can get rid of method getFreePrefix and remove the code from sendRequest that updates the former availablePrefix dictionary, which is now a list. The other improvement I would make is to create the aiohttp.ClientSession() instance only once in main and pass it as an argument to whatever needs it.

Moving on. To repeatedly process the prefixes concurrently, the simplest way I know is:

import asyncio
import json

import aiohttp


availablePrefix = ["http://URL-to-somewehere.com", "http://URL-to-somewehere-else.com"]

async def main():
    # create the session instance once and pass it as an argument:
    async with aiohttp.ClientSession() as session:
        while True:
            tasks = {asyncio.create_task(sendRequest(session, "https://stackoverflow.com/", prefixUrl))
                     for prefixUrl in availablePrefix}
            for task in asyncio.as_completed(tasks):
                result = await task

async def sendRequest(session, prefix, suffix):
    url = prefix + "/" +  suffix
    async with session.get(url) as resp:
        response = await resp.text()
        return json.loads(response)


asyncio.run(main())
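If you do want to keep your original "free/busy prefix" semantics, where a new request starts as soon as any prefix becomes free rather than waiting for a whole batch to finish, an asyncio.Queue models that naturally: a worker takes a prefix off the queue, uses it, and puts it back when done. A minimal sketch of the idea, with the network call stubbed out (handle_request is a hypothetical placeholder for your real sendRequest):

```python
import asyncio

availablePrefix = ["http://URL-to-somewehere.com", "http://URL-to-somewehere-else.com"]

async def handle_request(prefix, suffix):
    # Placeholder for the real sendRequest; here it just builds the URL.
    await asyncio.sleep(0)  # simulate I/O
    return prefix + "/" + suffix

async def worker(queue, suffix, results, n_requests):
    for _ in range(n_requests):
        prefix = await queue.get()      # blocks until a prefix is free
        try:
            results.append(await handle_request(prefix, suffix))
        finally:
            queue.put_nowait(prefix)    # prefix is free again
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    for prefix in availablePrefix:      # all prefixes start out free
        queue.put_nowait(prefix)
    results = []
    # Two workers sharing the prefix pool; each sends 3 requests.
    await asyncio.gather(*(worker(queue, "stackoverflow.com", results, 3)
                           for _ in range(2)))
    return results

results = asyncio.run(main())
```

The queue replaces the "true"/"false" dictionary: a prefix sitting in the queue is free, a prefix held by a worker is busy, and `await queue.get()` is the waiting that getFreePrefix did with its busy loop.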
