
Spawning 1000 processes at the same time in Erlang

I want to spawn 1000 or a variable number of processes in Erlang.

server.erl:

-module(server).
-export([start/2]).

start(LeadingZeroes, InputString) ->
    % io:format("Leading Zeroes: ~w", [LeadingZeroes]),
    % io:format("InputString: ~p", [InputString]).
    mineCoins(LeadingZeroes, InputString, 100).

mineCoins(LeadingZeroes, InputString, Target) ->
    % Note: spawn/3 takes the function name as an atom, not a call --
    % findTargetHash() would be evaluated here, in the current process.
    PID = spawn(miner, findTargetHash, []), % How to spawn this process 1000 times so that each process computes something and sends the results here
    PID ! {self(), {mine, LeadingZeroes, InputString, Target}},
    receive
        {found, Number} ->
            io:fwrite("Found: ~w", [Number]);
        % {square, Area} ->
        %     io:fwrite("Square area: ~w", [Area]);
        Other ->
            io:fwrite("In Other!")
    end.
    % io:fwrite("Yolo: ~w", [Square_Area]).

miner.erl (client):

-module(miner).
-export([findTargetHash/0]).

findTargetHash() ->
    receive
         {From , {mine, LeadingZeroes, InputString, Target}} ->
            % do something here
            From ! {found, Number};
        {From, {else, X}} ->
            io:fwrite("In Else area"),
            From ! {square, X*X}
    end,
    findTargetHash().

Here, I wish to spawn 1000 of these miner processes. How does one achieve this? Through list comprehensions, recursion, or some other way?
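For the list-comprehension route the question mentions, a minimal sketch (assuming `miner:findTargetHash/0` is exported, and that `LeadingZeroes`, `InputString`, and `Target` are bound as in `mineCoins/3` above):

```erlang
%% Spawn 1000 miners; the comprehension collects their pids.
%% Pass the function name as an atom -- spawn(miner, findTargetHash, []),
%% NOT spawn(miner, findTargetHash(), []) -- or the call would run in the
%% current process instead of the newly spawned one.
Pids = [spawn(miner, findTargetHash, []) || _ <- lists:seq(1, 1000)],

%% Send every miner the same work request:
[Pid ! {self(), {mine, LeadingZeroes, InputString, Target}} || Pid <- Pids].
```

The answer below builds the same idea up with explicit recursion, which also makes it easy to keep the pids for later sends and receives.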

Generally, you can do something N times like this:

-module(a).
-compile(export_all).

go(0) ->
    io:format("!finished!~n");
go(N) ->
    io:format("Doing something: ~w~n", [N]),
    go(N-1).

In the shell:

3> c(a).   
a.erl:2:2: Warning: export_all flag enabled - all functions will be exported
%    2| -compile(export_all).
%     |  ^
{ok,a}

4> a:go(3).
Doing something: 3
Doing something: 2
Doing something: 1
!finished!
ok

If you need to start N processes and subsequently send messages to them, then you will need their pids to do that, so you will have to save their pids somewhere:

go(0, Pids) ->
    io:format("All workers have been started.~n"),
    Pids;
go(N, Pids) ->
    Pid = spawn(b, worker, [self()]),
    go(N-1, [Pid|Pids]).


-module(b).
-compile(export_all).

worker(From) ->
    receive
        {From, Data} ->
            io:format("Worker ~w received ~w.~n", [self(), Data]), 
            From ! {self(), Data * 3};
        Other ->
            io:format("Error, received ~w.~n", [Other])
    end.

To start N=3 worker processes, you would call go/2 like this:

Pids = a:go(3, []).

That's a little awkward for someone who didn't write the code: why do I have to pass an empty list? So, you could define a go/1 like this:

go(N) ->  go(N, []).

Then, you can start 3 worker processes by simply writing:

Pids = go(3).

Next, you need to send each worker process a message containing the work it needs to do:

do_work([Pid|Pids], [Data|Datum]) ->
    Pid ! {self(), Data},
    do_work(Pids, Datum);
do_work([], []) ->
    io:format("All workers have been sent their work.~n").
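The same send loop can also be written without explicit recursion, using lists:zip/2 and lists:foreach/2 (a sketch; like the recursive version, it assumes the pid list and the work list have the same length, otherwise lists:zip/2 fails with badarg):

```erlang
do_work(Pids, DataList) ->
    Self = self(),  % the caller's pid, so workers know where to reply
    lists:foreach(fun({Pid, Data}) -> Pid ! {Self, Data} end,
                  lists:zip(Pids, DataList)),
    io:format("All workers have been sent their work.~n").
```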

Finally, you need to gather the results from the workers:

gather_results([Worker|Workers], Results) ->
    receive
        {Worker, Result} ->
            gather_results(Workers, [Result|Results])
    end;
gather_results([], Results) ->
    Results.

A couple of things to note about gather_results/2:

  1. The Worker variable in the receive has already been bound in the head of the function, so the receive is not waiting for just any worker process to send a message; rather, it is waiting for a message from that particular worker process (a selective receive).

  2. The first Worker process in the list of Workers may be the longest-running process, and you may wait in the receive for, say, 10 minutes for that process to finish, but gathering the results from the other worker processes will then require no further waiting. Therefore, collecting all the results essentially takes as long as the longest process, plus a few microseconds to loop through the others. The same holds for any other ordering of long and short processes in the list: receiving all the results takes roughly the time of the longest process.
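Putting the three steps together, a hypothetical wrapper in module a (run_all/1 is an illustrative name, not part of the original code) might look like:

```erlang
run_all(DataList) ->
    Pids = go(length(DataList), []),   % start one worker per work item
    do_work(Pids, DataList),           % send each worker its data
    gather_results(Pids, []).          % block until every worker replies
```

Note that gather_results/2 prepends each result, so the returned list comes back in reverse order of Pids; the shell transcript below shows this with [9,6,3].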

Here is a test run in the shell:

27> c(a).                                                       
a.erl:2:2: Warning: export_all flag enabled - all functions will be exported
%    2| -compile(export_all).
%     |  ^

{ok,a}

28> c(b).                                                       
b.erl:2:2: Warning: export_all flag enabled - all functions will be exported
%    2| -compile(export_all).
%     |  ^

{ok,b}

29> Pids = a:go(3, []).                                         
All workers have been started.
[<0.176.0>,<0.175.0>,<0.174.0>]

30> a:do_work(Pids, [1, 2, 3]).                                 
All workers have been sent their work.
Worker <0.176.0> received 1.
Worker <0.175.0> received 2.
Worker <0.174.0> received 3.
ok

31> a:gather_results(Pids, []).                                 
[9,6,3]

 
