
Multi Threaded python requests

I have written this code:

import requests

redfile = open('/root/Desktop/23/1.txt', 'r')
url = input('Type url: ')
for x in redfile:
    # strip the trailing newline before building the request path
    path = url + '/' + x.rstrip()
    req = requests.get(path)
    if req.status_code == 200:
        print('this page ' + path + ' is available')
        f = open('/root/Desktop/23/2.txt', 'a')
        f.write('\n')
        f.write(path)
        f.write('\n')
        f.close()

Now I need to make the requests multithreaded. How can I do it?

Maybe this is what you are looking for:

import requests
import threading


def thread(urls):
    # each worker keeps taking urls off the shared list until it is empty
    while True:
        try:
            url = urls.pop()  # guard against another thread emptying the list first
        except IndexError:
            break
        r = requests.get(url)
        if r.ok:
            print('this page ' + url + ' is available')
            with open('/root/Desktop/23/2.txt', 'a') as f:
                f.write('\n')
                f.write(url)
                f.write('\n')

urls = []
baseurl = input('Type url: ')
threads = []
with open('/root/Desktop/23/1.txt') as redfile:
    for line in redfile:
        urls.append(baseurl + '/' + line.rstrip())
for i in range(10):
    t = threading.Thread(target=thread, args=(urls,))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print('Done')

Using the threading module like this makes your script multithreaded. This script will have 10 threads:

for i in range(10):
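
The pop-from-a-shared-list approach above relies on CPython's list.pop() being atomic; the standard library's queue.Queue is the more conventional thread-safe way to hand work to a pool of threads. Here is a minimal sketch of that pattern (the worker body and the example urls are placeholders, not part of the original answer):

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        try:
            url = q.get_nowait()  # raises queue.Empty once the work runs out
        except queue.Empty:
            break
        # fetch and record the url here, as in thread() above
        print('processing ' + url)
        q.task_done()

for u in ('http://example.com/a', 'http://example.com/b'):  # hypothetical urls
    q.put(u)
threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()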

I suggest using a thread pool, and the concurrent.futures module is the most flexible:

import requests
from concurrent.futures import ThreadPoolExecutor

# worker thread to process one url:
def process_url(url, session):
    r = session.get(url)
    if r.status_code == 200:
        print('this page ' + url + ' is available')
        with open('/root/Desktop/23/2.txt', 'a') as f:
            f.write('\n')
            f.write(url)
            f.write('\n')


url = input('Type url: ')
# create all the urls:
with open('/root/Desktop/23/1.txt', 'r') as redfile:
    urls = [url + '/' + x.rstrip() for x in redfile]

with ThreadPoolExecutor(max_workers=len(urls)) as executor:
    # one shared Session reuses the underlying TCP connections across threads
    session = requests.Session()
    futures = [executor.submit(process_url, url, session) for url in urls]
    for future in futures:
        future.result()  # wait for each thread to end; worker thread implicitly returns None
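
An alternative sketch (not from the original answer) uses executor.map, which submits the calls and yields their results in order, and caps max_workers at a fixed size rather than starting one thread per url; the check_url helper below is hypothetical:

import requests
from concurrent.futures import ThreadPoolExecutor

def check_url(url, session):
    # return the url if it answered 200, None otherwise
    r = session.get(url)
    return url if r.status_code == 200 else None

url = input('Type url: ')
with open('/root/Desktop/23/1.txt', 'r') as redfile:
    urls = [url + '/' + x.rstrip() for x in redfile]

session = requests.Session()
with ThreadPoolExecutor(max_workers=10) as executor:  # fixed-size pool
    for result in executor.map(lambda u: check_url(u, session), urls):
        if result is not None:
            print('this page ' + result + ' is available')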
