
How can I improve my multithreading speed and efficiency in Python?

How can I improve the multithreading speed in my code?

My code takes 130 seconds to perform 700 requests with 100 threads, which is really slow and frustrating given that I am using 100 threads.

My code edits the parameter values in a URL and makes a request for each modified version, as well as for the original (unedited) URL. It reads the URLs from a file (urls.txt).

Let me show you an example:

Consider the following URL:

https://www.test.com/index.php?parameter=value1&parameter2=value2

The URL contains 2 parameters, so my code will make 3 requests.

1 request to the original URL:

https://www.test.com/index.php?parameter=value1&parameter2=value2

1 request with the first value modified:

https://www.test.com/index.php?parameter=replaced_value&parameter2=value2

1 request with the second value modified:

https://www.test.com/index.php?parameter=value1&parameter2=replaced_value
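This parameter-replacement step can be sketched with the standard library's urllib.parse, which handles query strings more robustly than manual string splitting (a sketch; `modified_urls` and the replacement value `'1'` are illustrative names, not from my actual code):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def modified_urls(url, replacement='1'):
    """Yield one URL per query parameter, with that parameter's value replaced."""
    parts = urlparse(url)
    params = parse_qsl(parts.query)
    for i, (name, _value) in enumerate(params):
        new_params = list(params)
        new_params[i] = (name, replacement)          # replace only this parameter's value
        yield urlunparse(parts._replace(query=urlencode(new_params)))

url = 'https://www.test.com/index.php?parameter=value1&parameter2=value2'
for u in modified_urls(url):
    print(u)
```

This prints one modified URL per parameter, matching the example above.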

I have tried using asyncio for this, but I had more success with concurrent.futures.

I even tried increasing the number of threads, which I initially thought was the problem, but if I increase the thread count significantly, the script freezes for 30-50 seconds at startup and doesn't improve the speed as I expected.

I believe the problem is how I structured the multithreading in my code, because I have seen others achieve incredible speeds with concurrent.futures.

import requests
import uuid
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

start = time.time()

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}
def make_request(url2):
    try:
        if '?' in url2 and '=' in url2:   # only process URLs that actually have query parameters
            request_1 = requests.get(url2, headers=headers, timeout=10)
            url2_modified = url2.split("?")[1]
            times = url2_modified.count("&") + 1
            for x in range(times):
                split1 = url2_modified.split("&")[x]
                value = split1.split("=")[1]
                parameter = split1.split("=")[0]
                url = url2.replace('=' + value, '=1')
                request_2 = requests.get(url, stream=True, headers=headers, timeout=10)
                html_1 = request_1.text
                html_2 = request_2.text
                # status_code is an int, so it cannot be concatenated to a str with '+';
                # pass separate arguments to print() instead
                print(request_1.status_code, '-', url2)
                print(request_2.status_code, '-', url)

    except requests.exceptions.RequestException as e:
        return e


def runner():
    threads= []
    with ThreadPoolExecutor(max_workers=100) as executor:
        file1 = open('urls.txt', 'r', errors='ignore')
        Lines = file1.readlines()   
        count = 0
        for line in Lines:
            count += 1
            threads.append(executor.submit(make_request, line.strip()))
      
runner()

end = time.time()
print(end - start)

Inside the loop in make_request you run a normal requests.get, and it doesn't use a thread (or any other method) to make it faster - so it has to wait for the previous request to finish before it can run the next one.
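You can see this difference with sleep stubs in place of requests.get (a sketch; `fake_get` only simulates network latency, and the exact timings will vary):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_get(url):
    time.sleep(0.2)   # stand-in for the network latency of requests.get
    return url

urls = [f'https://example.com/?p={i}' for i in range(5)]

start = time.time()
for u in urls:            # sequential: waits for each "request" before starting the next
    fake_get(u)
sequential = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=5) as ex:   # concurrent: all five overlap
    list(ex.map(fake_get, urls))
threaded = time.time() - start

print(f'sequential: {sequential:.2f}s, threaded: {threaded:.2f}s')
```

Five sequential 0.2s "requests" take about 1 second, while the threaded version takes roughly the time of a single one.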

Inside make_request I use another ThreadPoolExecutor to run each requests.get (created inside the loop) in a separate thread:

executor.submit(make_modified_request, modified_url) 

and it gives me a time of ~1.2s.

If I use a normal call

make_modified_request(modified_url)

then it gives me a time of ~3.2s.


Minimal working example:

I use the real URL https://httpbin.org/get so everyone can simply copy and run it.

from concurrent.futures import ThreadPoolExecutor
import requests
import time
#import urllib.parse

# --- constants --- (PEP8: UPPER_CASE_NAMES)

HEADERS = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}

# --- functions ---

def make_modified_request(url):
    """Send modified url."""

    print('send:', url)
    response = requests.get(url, stream=True, headers=HEADERS)
    print(response.status_code, '-', url)
    html = response.text   # ???
    # ... code to process HTML ...

def make_request(url):
    """Send normal url and create threads with modified urls."""

    threads = []

    with ThreadPoolExecutor(max_workers=10) as executor:
            print('send:', url)

            # send base url            
            response = requests.get(url, headers=HEADERS)
            print(response.status_code, '-', url)
            html = response.text   # ???

            #parts = urllib.parse.urlparse(url)
            #print('query:',  parts.query)
            #arguments = urllib.parse.parse_qs(parts.query)
            #print('arguments:', arguments)   # dict  {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['E']}

            arguments = url.split("?")[1]
            arguments = arguments.split("&")
            arguments = [arg.split("=") for arg in arguments]
            print('arguments:', arguments)    # list [['a', 'A'], ['b', 'B'], ['c', 'C'], ['d', 'D'], ['e', 'E']]
             
            for name, value in arguments:
                modified_url = url.replace('='+value, '=1')
                print('modified_url:', modified_url)
                
                # run thread with modified url
                threads.append(executor.submit(make_modified_request, modified_url))
                
                # run normal function with modified url 
                #make_modified_request(modified_url)

    print('[make_request] len(threads):', len(threads))
    
def runner():
    threads = []
    
    with ThreadPoolExecutor(max_workers=10) as executor:
        #fh = open('urls.txt', errors='ignore')
        fh = [
            'https://httpbin.org/get?a=A&b=B&c=C&d=D&e=E', 
            'https://httpbin.org/get?f=F&g=G&h=H&i=I&j=J',
            'https://httpbin.org/get?k=K&l=L&m=M&n=N&o=O',
            'https://httpbin.org/get?a=A&b=B&c=C&d=D&e=E', 
            'https://httpbin.org/get?f=F&g=G&h=H&i=I&j=J',
            'https://httpbin.org/get?k=K&l=L&m=M&n=N&o=O',
           ]

        for line in fh:
            url = line.strip()
            # create thread with url
            threads.append(executor.submit(make_request, url))

    print('[runner] len(threads):', len(threads))

# --- main ---

start = time.time()

runner()

end = time.time()
print('time:', end - start)

By the way:

I was thinking about using a single

executor = ThreadPoolExecutor(max_workers=10)

and then using the same executor in all functions - maybe it would run faster - but at this moment I don't have working code for that.
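A rough sketch of that idea, with a sleep-based stub standing in for requests.get (`fetch`, `all_urls`, and the URL list are illustrative - I haven't tested this with real requests): flatten the base URL and all of its modified URLs into one task list and submit everything to the same pool:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def fetch(url):
    """Stub standing in for requests.get(url); sleeps to simulate network latency."""
    time.sleep(0.2)
    return f'200 - {url}'

def all_urls(base_url):
    """Yield the base URL and one modified URL per query parameter."""
    yield base_url
    query = base_url.split('?')[1]
    for pair in query.split('&'):
        value = pair.split('=')[1]
        yield base_url.replace('=' + value, '=1')

urls = [
    'https://httpbin.org/get?a=A&b=B&c=C&d=D&e=E',
    'https://httpbin.org/get?f=F&g=G&h=H&i=I&j=J',
]

start = time.time()
with ThreadPoolExecutor(max_workers=20) as executor:   # one shared pool for everything
    futures = [executor.submit(fetch, u) for base in urls for u in all_urls(base)]
    for future in as_completed(futures):
        print(future.result())
print('time:', time.time() - start)
```

With a single pool there is no nesting of executors, so all 12 tasks (2 base URLs + 10 modified ones) run concurrently up to the worker limit.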
