I am not sure why multiprocessing Queue isn't working. What is it doing?
I am using Python's built-in socket and multiprocessing libraries to scan TCP ports of a host. I know my first function works, and I am just trying to make it work with the multiprocessing Queue and Process; I'm not sure where I am going wrong. If I remove the Queue, everything seems to complete; I just actually need to get the results from it.
from multiprocessing import Process, Queue
import socket

def tcp_connect(ip, port_number):
    try:
        scanner = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        scanner.settimeout(0.1)
        scanner.connect((str(ip), port_number))
        scanner.close()
        #put into queue
        ## Remove line below if you
        q.put(port_number)
    except:
        pass

RESULTS = []
list_of_numbs = list(range(1, 501))

for numb in list_of_numbs:
    #make my queue
    q = Queue()
    p = Process(target=tcp_connect, args=('google', numb))
    p.start()
    #take my results from my queue and append to a list
    RESULTS.append(q.get())
    p.join()

print(RESULTS)
I would just like it to print out the port numbers that were open. Right now, since it is scanning google.com, it should really only return 80 and 443.
EDIT: This would work if I used Pool, but I went to Process and Queue because the bigger piece of this runs in Django with celery, and they don't allow daemon processes when executing with Pool.
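(For reference, here is a minimal sketch of one way the Process-and-Queue version could be made to work: pass the queue to the worker as an explicit argument instead of relying on a global, start all the workers, and drain the queue at the end. The use of google.com as the host and the flat list of 500 processes are my own simplifications, not part of the original question.)

from multiprocessing import Process, Queue
import socket

def tcp_connect(ip, port_number, q):
    # The queue is now an explicit argument, so every child process
    # receives the same Queue object the parent later reads from.
    try:
        scanner = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        scanner.settimeout(0.1)
        scanner.connect((str(ip), port_number))
        scanner.close()
        q.put(port_number)
    except OSError:
        pass

if __name__ == '__main__':
    q = Queue()
    processes = [Process(target=tcp_connect, args=('google.com', n, q))
                 for n in range(1, 501)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    # Drain whatever the workers reported; only a few small ints
    # ever land on the queue, so joining before draining is safe here.
    results = []
    while not q.empty():
        results.append(q.get())
    print(sorted(results))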
For work like this, a multiprocessing.Pool would be a better way of handling it. You don't have to worry about creating Processes and Queues; all that is done for you in the background. Your worker function only has to return a result, and that will be transported to the parent process for you.
I would suggest using multiprocessing.Pool.imap_unordered(), because it starts returning results as soon as they are available.

One thing: the worker function takes only one argument. If you need multiple different arguments for each call, wrap them in a tuple. If you have arguments that are the same for all calls, use functools.partial.
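A minimal sketch of that approach, assuming the same scan as in the question (the reworked tcp_connect below returns the port number on success and None on failure; the names are illustrative, not from the original code):

import socket
from functools import partial
from multiprocessing import Pool

def tcp_connect(ip, port_number):
    # Return the port on a successful connect, None otherwise,
    # so the parent can filter the results.
    try:
        scanner = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        scanner.settimeout(0.1)
        scanner.connect((str(ip), port_number))
        scanner.close()
        return port_number
    except OSError:
        return None

if __name__ == '__main__':
    # Fix the host with functools.partial, so the worker only
    # needs the one varying argument: the port number.
    scan = partial(tcp_connect, 'google.com')
    with Pool() as pool:
        # imap_unordered yields each result as soon as its worker finishes.
        open_ports = [p for p in pool.imap_unordered(scan, range(1, 501))
                      if p is not None]
    print(open_ports)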
A slightly more modern approach would be to use the Executor.map() method from concurrent.futures. Since your work consists mainly of socket calls, you could use a ThreadPoolExecutor here, I think. That should be slightly less resource-intensive than a ProcessPoolExecutor.
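A sketch of the same scan with concurrent.futures, under the same assumptions (tcp_connect as in the previous sketch; the max_workers value is an arbitrary choice, not a recommendation):

import socket
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def tcp_connect(ip, port_number):
    try:
        scanner = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        scanner.settimeout(0.1)
        scanner.connect((str(ip), port_number))
        scanner.close()
        return port_number
    except OSError:
        return None

# Threads are much cheaper than processes here, and the workers
# spend almost all their time blocked on the socket, so the GIL
# is not a bottleneck for this kind of I/O-bound work.
with ThreadPoolExecutor(max_workers=50) as executor:
    results = executor.map(partial(tcp_connect, 'google.com'), range(1, 501))
    open_ports = [p for p in results if p is not None]
print(open_ports)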