
Erlang mnesia database access

I have designed a mnesia database with 5 different tables. The idea is to simulate queries from many nodes (computers), not just one. At the moment I can execute a query from the terminal, but I need help on how to make it such that I am requesting information from many computers. I am testing for scalability and want to compare the performance of mnesia against other databases. Any ideas will be highly appreciated.

The best way to test mnesia is by running an intensive, threaded job both on the local Erlang node where mnesia is running and on remote nodes. Usually, you want the remote nodes to issue RPC calls in which reads and writes are executed against the mnesia tables. Of course, high concurrency comes with a trade-off: transaction speed will drop, and many transactions may be retried because many locks may be held at a given time. But mnesia will ensure that every process receives an {atomic,ok} for each transactional call it makes.
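For instance, a remote node can drive such load with plain rpc:call/4 against the mnesia node. A minimal sketch, assuming a made-up node name 'db@127.0.0.1', the key_value record defined below, and that the module defining these funs is also loaded on the mnesia node:

%% issued from any connected remote Erlang node that shares the cookie
remote_write(Record) ->
    rpc:call('db@127.0.0.1', mnesia, transaction,
             [fun() -> mnesia:write(Record) end]).

remote_read(Key) ->
    rpc:call('db@127.0.0.1', mnesia, dirty_read, [{key_value, Key}]).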

The Concept
I propose a non-blocking overload in which both writes and reads are directed at each mnesia table by as many processes as possible. We measure the time difference between the call to the write function and the moment our mnesia subscriber receives the corresponding write event. These events are sent by mnesia after every successful transaction, so we do not need to interrupt the working/overloading processes; instead we let a dedicated mnesia subscriber wait for asynchronous events that report successful writes and deletes as soon as they occur.
The technique is this: just before calling a write function we take a timestamp and note down the record key together with the write CALL timestamp. Our mnesia subscriber then notes down the record key together with the write/read EVENT timestamp. The difference between these two timestamps (let us call it the CALL-to-EVENT time) gives us a rough idea of how loaded, or how efficient, the system is. As locking increases with concurrency, we should see the CALL-to-EVENT time grow. Processes doing writes will do so concurrently and without limit, while processes doing reads will also continue without interruption. We will choose the number of processes for each operation, but first let us lay the ground for the entire test case.
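For example, with timestamps taken via os:timestamp/0, the CALL-to-EVENT time in microseconds can be obtained with timer:now_diff/2 (a small sketch; the code further below uses its own time_diff/2 helper instead):

call_to_event_us(EventTimeStamp, CallTimeStamp) ->
    %% both arguments are {MegaSecs, Secs, MicroSecs} tuples
    timer:now_diff(EventTimeStamp, CallTimeStamp).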
Everything above applies to local operations (processes running on the same node as mnesia).

--> Simulating Many Nodes
Well, I have personally not simulated nodes in Erlang; I have always worked with real Erlang nodes on the same box or on several different machines in a networked environment. However, I advise that you look closely at the slave module ( http://www.erlang.org/doc/man/slave.html ), and even more so at ct_slave ( http://www.erlang.org/doc/man/ct_slave.html ). Also look at the following links, which talk about creating, simulating and controlling many nodes under a parent node: http://www.erlang.org/doc/man/pool.html , Erlang: starting slave node , https://support.process-one.net/doc/display/ERL/Starting+a+set+of+Erlang+cluster+nodes , http://www.berabera.info/oldblog/lenglet/howtos/erlangkerberosremctl/index.html . I will not dive into the jungle of Erlang nodes here because that is another complicated topic; instead I will concentrate on tests on the same node that runs mnesia. I have come up with the mnesia test concept above, so let us start implementing it.
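As a minimal sketch of starting one extra node with the slave module (all names here are made up, the current node is assumed to have been started with -name or -sname, and the new node is attached to this node's schema via extra_db_nodes):

start_bench_node() ->
    %% start a node called bench1 on the local host, passing the cookie
    Host = list_to_atom(net_adm:localhost()),
    {ok, Node} = slave:start(Host, bench1, "-setcookie devel"),
    %% start mnesia on the new node and connect it to this node's schema
    ok = rpc:call(Node, mnesia, start, []),
    {ok, _} = rpc:call(Node, mnesia, change_config, [extra_db_nodes, [node()]]),
    Node.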

Now, first of all, you need to make a test plan for each table (separately). It should include both writes and reads. Then you need to decide whether you want to do dirty operations or transactional operations on the tables. You also need to test the speed of traversing a mnesia table in relation to its size. Let us take the example of a simple mnesia table:

-record(key_value,{key,value,instanceId,pid}).

We would want to have a general function for writing into our table, shown below:

write(Record)->
    %% Use mnesia:activity/4 to test several activity
    %% contexts (and if your table is fragmented)
    %% like the commented code below
    %%
    %%  mnesia:activity(
    %%      transaction, %% sync_transaction | async_dirty | ets | sync_dirty
    %%      fun(Y) -> mnesia:write(Y) end,
    %%      [Record],
    %%      mnesia_frag
    %%  )
    mnesia:transaction(fun() -> ok = mnesia:write(Record) end).

And for our reads, we will have:

read(Key)->
    %% Use mnesia:activity/4 to test several activity
    %% contexts (and if your table is fragmented)
    %% like the commented code below
    %%
    %%  mnesia:activity(
    %%      transaction, %% sync_transaction | async_dirty| ets | sync_dirty
    %%      fun(Y) -> mnesia:read({key_value,Y}) end,
    %%      [Key],
    %%      mnesia_frag
    %%  )
    mnesia:transaction(fun() -> mnesia:read({key_value,Key}) end).
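Before settling on a context, timer:tc/1 gives a quick way to compare a transactional and a dirty write on a single call (a sketch; run it on a node where the table already exists, with any record you want to measure):

compare_write_contexts(Record) ->
    {TxMicros, {atomic, ok}} =
        timer:tc(fun() ->
                     mnesia:transaction(fun() -> mnesia:write(Record) end)
                 end),
    {DirtyMicros, ok} = timer:tc(fun() -> mnesia:dirty_write(Record) end),
    %% returns the time in microseconds taken by each context
    {{transaction, TxMicros}, {dirty, DirtyMicros}}.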
Now, we want to write very many records into our small table, so we need a key generator. This key generator will be our own pseudo-random string generator. However, we need the generator to tell us the instant it generates a key, so that we can record it; we want to see how long it takes to write a generated key. Let us put it down roughly like this:
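A minimal sketch of such a generator; the exact implementation here is an assumption for illustration (MD5-based string keys, timestamps from os:timestamp/0, and a registered process named mnesia_subscriber, defined further below):

timestamp() -> os:timestamp().

generate_instance_id() ->
    %% a unique-enough identifier for each write/read instance
    integer_to_list(erlang:phash2({self(), make_ref(), timestamp()})).

guid() ->
    %% build a pseudo-random string key from an MD5 digest
    MD5 = erlang:md5(term_to_binary({self(), node(), make_ref(), timestamp()})),
    Key = lists:flatten([io_lib:format("~2.16.0B", [B]) || B <- binary_to_list(MD5)]),
    %% tell our mnesia subscriber about this key, together with the
    %% timestamp taken just before the write call is made
    Instance = generate_instance_id(),
    mnesia_subscriber ! {self(), {key, write, Key, timestamp(), Instance}},
    {Key, Instance}.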
To make very many concurrent writes, we need a function that will be executed by the many processes we will spawn. In this function it is desirable NOT to put any blocking calls such as sleep/1, usually implemented as sleep(T) -> receive after T -> true end., because such a call would make a process hang for the specified number of milliseconds. mnesia_tm does the lock control, retries, blocking and so on on behalf of the processes in order to avoid deadlocks. Let us say we want each process to write an unlimited amount of records. Here is our function:

-define(NO_OF_PROCESSES,20).

start_write_jobs()->
    [spawn(?MODULE,generate_and_write,[]) || _ <- lists:seq(1,?NO_OF_PROCESSES)],
    ok.

generate_and_write()->
    %% remember that in ?MODULE:guid/0 we inform our mnesia_subscriber
    %% about the generated key, together with the timestamp taken just
    %% before the write is made. The subscriber notes this down and then
    %% waits for the mnesia event about the write operation. It then
    %% takes the event timestamp and calculates the time difference,
    %% from which we can judge performance.
    %% In this case, the processes make unlimited writes into our mnesia
    %% table, and the subscriber traps the events as soon as a successful
    %% write is made in mnesia.
    %% For all keys we just write a zero as the value.
    {Key,Instance} = guid(),
    write(#key_value{key = Key,value = 0,instanceId = Instance,pid = self()}),
    generate_and_write().

Likewise, let us see how the read jobs will be done. We will have a key provider that keeps rotating over the mnesia table, picking only keys, traversing the table up and down. Here is its code:

first()-> mnesia:dirty_first(key_value).

next(FromKey)-> mnesia:dirty_next(key_value,FromKey).

start_key_picker()-> register(key_picker,spawn(fun() -> key_picker() end)).

key_picker()->
    try ?MODULE:first() of
        '$end_of_table' ->
            io:format("~n\tTable is empty, my dear !~n",[]),
            %% lets throw something in there to start with
            {Key,Instance} = guid(),
            ?MODULE:write(#key_value{key = Key,value = 0,instanceId = Instance,pid = self()}),
            key_picker();
        Key -> wait_key_reqs(Key)
    catch
        EXIT:REASON ->
            error_logger:error_report(["Key Picker dies",{EXIT,REASON}]),
            exit({EXIT,REASON})
    end.

wait_key_reqs('$end_of_table')->
    receive
        {From,<<"get_key">>} ->
            Key = ?MODULE:first(),
            From ! {self(),Key},
            wait_key_reqs(?MODULE:next(Key));
        {_,<<"stop">>} -> exit(normal)
    end;
wait_key_reqs(Key)->
    receive
        {From,<<"get_key">>} ->
            From ! {self(),Key},
            NextKey = ?MODULE:next(Key),
            wait_key_reqs(NextKey);
        {_,<<"stop">>} -> exit(normal)
    end.

key_picker_rpc(Command)->
    try erlang:send(key_picker,{self(),Command}) of
        _ ->
            receive
                {_,Reply} -> Reply
            after timer:seconds(60) ->
                %% key_picker hangs, or is too busy
                erlang:throw({key_picker,hanged})
            end
    catch
        _:_ ->
            %% key_picker is dead; restart it and retry
            start_key_picker(),
            timer:sleep(timer:seconds(5)),
            key_picker_rpc(Command)
    end.

%% Now, this is where the reader processes will be getting keys.
%% It will appear to them as though the keys are random, because a
%% single process does the traversal. It is all a game of chance,
%% depending on the scheduler's choice of which reader gets the next
%% key. Okay, lets get going below :)

get_key()->
    Key = key_picker_rpc(<<"get_key">>),

    %% lets report to our "massive" mnesia subscriber
    %% about a read which is about to happen,
    %% together with a time stamp.
    Instance = generate_instance_id(),
    mnesia_subscriber ! {self(),{key,read,Key,timestamp(),Instance}},
    {Key,Instance}.

Wow! Now we need to create the function that will start all the readers.

-define(NO_OF_READERS,10).

start_read_jobs()->
    [spawn(?MODULE,constant_reader,[]) || _ <- lists:seq(1,?NO_OF_READERS)],
    ok.

constant_reader()->
    {Key,InstanceId} = ?MODULE:get_key(),
    {atomic,[Record]} = ?MODULE:read(Key),
    %% tell mnesia_subscriber that a read has been done so it records the EVENT timestamp
    mnesia:report_event({read_success,Record,self(),InstanceId}),
    constant_reader().

Now, the biggest part: mnesia_subscriber! This is a simple process that subscribes to simple table events (see the mnesia events documentation in the Mnesia User's Guide). Here is the mnesia subscriber:

-record(read_instance,{
        instance_id,
        before_read_time,
        after_read_time,
        read_time       %% after_read_time - before_read_time
    }).

-record(write_instance,{
        instance_id,
        before_write_time,
        after_write_time,
        write_time          %% after_write_time - before_write_time
    }).

-record(benchmark,{
        id,         %% {pid(),Key}
        read_instances = [],
        write_instances = []
    }).

subscriber()->
    mnesia:subscribe({table,key_value, simple}),

    %% lets also subscribe for system
    %% events because events passed through
    %% mnesia:report_event/1 arrive as
    %% system events.

    mnesia:subscribe(system),
    wait_events().

-include_lib("stdlib/include/qlc.hrl").

wait_events()->
receive
    {From,{key,write,Key,TimeStamp,InstanceId}} ->
        %% A process is just about to call
        %% mnesia:write/1 and so we note this down
        Fun = fun() ->
                case qlc:e(qlc:q([X || X <- mnesia:table(benchmark),X#benchmark.id == {From,Key}])) of
                    [] ->
                        ok = mnesia:write(#benchmark{
                                id = {From,Key},
                                write_instances = [
                                        #write_instance{
                                            instance_id = InstanceId,
                                            before_write_time = TimeStamp
                                        }]
                                }),
                                ok;
                    [Here] ->
                        WIs = Here#benchmark.write_instances,
                        NewInstance = #write_instance{
                                        instance_id = InstanceId,
                                        before_write_time = TimeStamp
                                    },
                        ok = mnesia:write(Here#benchmark{write_instances = [NewInstance|WIs]}),
                        ok
                end
            end,
        mnesia:transaction(Fun),
        wait_events();
    {mnesia_table_event,{write,#key_value{key = Key,instanceId = I,pid = From},_ActivityId}} ->
        %% A process has successfully made a write. So we look it up,
        %% get the timestamp difference, and finish benchmarking that write
        WriteTimeStamp = timestamp(),
        F = fun()->
                [Here] = mnesia:read({benchmark,{From,Key}}),
                WIs = Here#benchmark.write_instances,
                {_,WriteInstance} = lists:keysearch(I,2,WIs),
                BeforeTmStmp = WriteInstance#write_instance.before_write_time,
                NewWI = WriteInstance#write_instance{
                            after_write_time = WriteTimeStamp,
                            write_time = time_diff(WriteTimeStamp,BeforeTmStmp)
                        },
                ok = mnesia:write(Here#benchmark{write_instances = [NewWI|lists:keydelete(I,2,WIs)]}),
                ok
            end,
        mnesia:transaction(F),
        wait_events();
    {From,{key,read,Key,TimeStamp,InstanceId}} ->
        %% A process is just about to do a read
        %% using mnesia:read/1 and so we note this down
        Fun = fun()->
                case qlc:e(qlc:q([X || X <- mnesia:table(benchmark),X#benchmark.id == {From,Key}])) of
                    [] ->
                        ok = mnesia:write(#benchmark{
                                id = {From,Key},
                                read_instances = [
                                        #read_instance{
                                            instance_id = InstanceId,
                                            before_read_time = TimeStamp
                                        }]
                                }),
                                ok;
                    [Here] ->
                        RIs = Here#benchmark.read_instances,
                        NewInstance = #read_instance{
                                        instance_id = InstanceId,
                                        before_read_time = TimeStamp
                                    },
                        ok = mnesia:write(Here#benchmark{read_instances = [NewInstance|RIs]}),
                        ok
                end
            end,
        mnesia:transaction(Fun),
        wait_events();
    {mnesia_system_event,{mnesia_user,{read_success,#key_value{key = Key},From,I}}} ->
        %% A process has successfully made a read. So we look it up,
        %% get the timestamp difference, and finish benchmarking that read
        ReadTimeStamp = timestamp(),
        F = fun()->
                [Here] = mnesia:read({benchmark,{From,Key}}),
                RIs = Here#benchmark.read_instances,
                {_,ReadInstance} = lists:keysearch(I,2,RIs),
                BeforeTmStmp = ReadInstance#read_instance.before_read_time,
                NewRI = ReadInstance#read_instance{
                            after_read_time = ReadTimeStamp,
                            read_time = time_diff(ReadTimeStamp,BeforeTmStmp)
                        },
                ok = mnesia:write(Here#benchmark{read_instances = [NewRI|lists:keydelete(I,2,RIs)]}),
                ok
            end,
        mnesia:transaction(F),
        wait_events();
    _ -> wait_events()
end.

time_diff({A2,B2,C2} = _After,{A1,B1,C1} = _Before)->
    {A2 - A1,B2 - B1,C2 - C1}.

Alright! That was huge :) So we are done with the subscriber. We now need the code that crowns it all together, and then we run the necessary tests.

install()->
    mnesia:stop(),
    mnesia:delete_schema([node()]),
    mnesia:create_schema([node()]),
    mnesia:start(),
    {atomic,ok} = mnesia:create_table(key_value,[
        {attributes,record_info(fields,key_value)},
        {disc_copies,[node()]}
    ]),
    {atomic,ok} = mnesia:create_table(benchmark,[
        {attributes,record_info(fields,benchmark)},
        {disc_copies,[node()]}
    ]),
    mnesia:stop(),
    ok.

start()->
    mnesia:start(),
    ok = mnesia:wait_for_tables([key_value,benchmark],timer:seconds(120)),
    %% boot up our subscriber
    register(mnesia_subscriber,spawn(?MODULE,subscriber,[])),
    start_write_jobs(),
    start_key_picker(),
    start_read_jobs(),
    ok.

Now, with proper analysis of the benchmark table records, you will get the average read times, average write times and so on, and you can plot these times against an increasing number of processes. As we increase the number of processes, you will discover that the read and write times increase. Get the code, read it and make use of it. You may not use all of it, but I am sure you will pick up new concepts from it as others send in their solutions. Using mnesia events is the best way to test mnesia reads and writes without blocking the processes doing the actual writing or reading. In the example above, the reading and writing processes are not under any control; in fact, they will run forever until you terminate the VM. You can traverse the benchmark table with a suitable formula, use the read and write times per read or write instance, and then calculate averages, variances and so on.
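As a sketch of such an analysis (assuming the timestamps are os:timestamp/0 tuples, as in the generator sketch above), the average write time in microseconds could be computed by folding over the benchmark table:

average_write_time_us() ->
    F = fun() ->
            mnesia:foldl(
              fun(#benchmark{write_instances = WIs}, {Sum, Count}) ->
                  %% only count instances whose write event has arrived
                  Times = [timer:now_diff(A, B)
                           || #write_instance{after_write_time = A,
                                              before_write_time = B} <- WIs,
                              A =/= undefined],
                  {Sum + lists:sum(Times), Count + length(Times)}
              end, {0, 0}, benchmark)
        end,
    {atomic, {Sum, Count}} = mnesia:transaction(F),
    case Count of
        0 -> no_complete_writes;
        _ -> Sum / Count
    end.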




Testing from remote computers, simulating nodes, and benchmarking against other DBMSs may not be that relevant, for many reasons. The concepts, motivations and goals of Mnesia are very different from those of several existing database types: document-oriented DBs, RDBMSs, object-oriented DBs and so on. In fact, mnesia ought to be compared with a database such as this one. It is a distributed DBMS with hybrid/unstructured data structures which belong to the Erlang language. Benchmarking Mnesia against another type of database may not be right, because its purpose is very different from many of them and it is tightly coupled with Erlang/OTP. However, knowledge of how mnesia works, of transaction contexts, indexing, concurrency and distribution, can be key to a good database design. Mnesia can store very complex data structures. Remember, the more complex a data structure is, with nested information, the more work is required to unpack it and extract the information you need at run time, which means more CPU cycles and memory. Sometimes, normalization with mnesia may simply result in poor performance, so the implementation of its concepts is far away from other databases.
It is good that you are interested in Mnesia performance across several machines (distributed); however, the performance is only as good as Distributed Erlang is. The great thing is that atomicity is ensured for every transaction. Concurrent requests from remote nodes can still be sent via RPC calls. Remember that if you have multiple replicas of mnesia on different machines, processes running on each node will write to that very node, and mnesia will carry on from there with its replication. Mnesia is very fast at replication, unless the network is doing really badly, the nodes are not connected, or the network becomes partitioned at runtime.
Mnesia ensures consistency and atomicity of CRUD operations. For this reason, replicated mnesia databases depend highly on network availability for good performance. As long as the Erlang nodes remain connected, two or more Mnesia nodes will always have the same data, and a read on any one node will give you the most recent information. Problems arise when a disconnection occurs and each node registers the other as being down. More information on mnesia's performance can be found by following these links:

http://igorrs.blogspot.com/2010/05/mnesia-one-year-later.html
http://igorrs.blogspot.com/2010/05/mnesia-one-year-later-part-2.html
http://igorrs.blogspot.com/2010/05/mnesia-one-year-later-part-3.html
http://igorrs.blogspot.com/2009/11/consistent-hashing-for-mnesia-fragments.html

As a consequence, the concepts behind mnesia can only be compared with Ericsson's NDB database, described here: http://ww.dolphinics.no/papers/abstract/ericsson.html , but not with existing RDBMSs, document-oriented databases, and so on. Those are my thoughts :) let us wait for what others have to say.

You start additional nodes using a command like this:

erl -name test1@127.0.0.1 -setcookie devel \
    -mnesia extra_db_nodes "['devel@127.0.0.1']" \
    -s mnesia start

where 'devel@127.0.0.1' is the node where mnesia is already set up. In this case all tables will be accessed from the remote node, but you can make local copies with mnesia:add_table_copy/3, as shown below.
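For example, run on the newly started node (using the key_value table from the answer above; ram_copies is just one possible storage type):

{atomic, ok} = mnesia:add_table_copy(key_value, node(), ram_copies).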

Then you can use spawn/2 or spawn/4 to start load generation on all nodes with something like:

lists:foreach(fun(N) ->
                  spawn(N, fun () ->
                               %% generate some load
                               ok
                           end)
              end,
              [ 'test1@127.0.0.1', 'test2@127.0.0.1' ]).
