
Twisted Throughput Limit Decreases

I am developing a program that simulates networks on a single machine. I am using Twisted for asynchronous I/O, since a thread per 'connection' would be a bit much. (I have also implemented a similar program in Java using NIO.) However, as I scale up the emulated network size, throughput under Twisted decreases, whereas for the same network sizes the Java implementation's throughput continues to grow. (The growth rate slows, but it is still an increase.) E.g., Python: 100 nodes = 58 MB total throughput, 300 nodes = 45 MB; Java: 100 nodes = 24 MB, 300 nodes = 56 MB.

Does anyone have any suggestions as to why this might be happening?

The only reason I can think of is that in the Java version each 'peer' runs in its own thread (containing its own selector that monitors that peer's connections), while in the Python version everything is registered with the reactor (and therefore a single selector). As the Python version scales up, the single selector cannot respond as quickly. However, this is just a guess; any more concrete information would be appreciated.
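For context, a minimal sketch of what the Python side of that design looks like (names such as `PeerProtocol` and `num_peers` are illustrative, not from the original code): every listening port and connection is multiplexed by the one reactor, in contrast to the Java design with one selector thread per peer.

```python
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory

class PeerProtocol(Protocol):
    """Hypothetical per-peer protocol; the real simulation logic goes here."""
    def dataReceived(self, data):
        pass  # handle traffic arriving for this simulated peer

num_peers = 100  # scaled up to 300 in the tests below

# All peers share the single reactor (and hence a single selector/epoll set)
for i in range(num_peers):
    reactor.listenTCP(9000 + i, Factory.forProtocol(PeerProtocol))

reactor.run()
```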

EDIT: I ran some tests as suggested by Jean-Paul Calderone; the results are posted at imgur. For those who might be wondering, the following average throughput was reported for the tests. (The profiling was done with cProfile; tests were run for 60 seconds.)
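As a sketch of how such a profiled 60-second run could be set up (the stop-after-60-seconds scheduling is an assumption; the original test harness was not posted):

```python
import cProfile
from twisted.internet import reactor

# ... peers are created and registered with the reactor here ...

# Stop the reactor after the 60-second test window, then profile the run.
reactor.callLater(60, reactor.stop)
cProfile.run('reactor.run()', 'twisted_profile.out')
```

The resulting stats file can then be inspected with the standard pstats module, e.g. `python -m pstats twisted_profile.out`.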

Epoll Reactor: 100 Peers: 20.34 MB, 200 Peers: 18.84 MB, 300 Peers: 17.4 MB

Select Reactor: 100 Peers: 18.86 MB, 200 Peers: 19.08 MB, 300 Peers: 16.732 MB

One thing that rose and fell with the reported throughput was the number of calls made to main.py:48(send), but this correlation is not really a surprise, as this is where the data is being sent.

For both reactors, the time spent in the socket send method increased as throughput decreased, while the number of calls to it decreased. (That is: more time was spent sending on the sockets, with fewer calls to send.) E.g., for epoll, {method 'send' of '_socket.socket' objects} took 2.5 sec over 413,600 calls with 100 peers, versus 5.5 sec over 354,300 calls with 300 peers.

So, to try to answer the original question: does this data point to the selector being the limiting factor? The time spent in the selector seems to decrease as the number of peers increases (if the selector were slowing everything down, wouldn't one expect the time spent inside it to rise?). Is there anything else that might be limiting the amount of data being sent? (The sending of the data is just one function per peer that re-registers itself with reactor.callLater again and again; that is main.py:49 (send).)
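For reference, a minimal sketch of that kind of self-rescheduling send loop (the interval and the `next_chunk` helper are assumptions; the actual main.py was not posted):

```python
from twisted.internet import reactor

SEND_INTERVAL = 0.01  # assumed interval; the original value is unknown

def send(peer):
    """Write a chunk of data for this peer, then reschedule via callLater."""
    peer.transport.write(peer.next_chunk())  # next_chunk() is hypothetical
    reactor.callLater(SEND_INTERVAL, send, peer)

# kicked off once per peer after its connection is established:
# reactor.callLater(SEND_INTERVAL, send, peer)
```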

Try profiling the application at different levels of concurrency and see which things get slower as you add more connections.

select is a likely candidate; if you find that it uses noticeably more and more time as you add connections, try the poll or epoll reactor.
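Switching reactors in Twisted is done by installing the alternative reactor before `twisted.internet.reactor` is first imported:

```python
# Must run before anything imports twisted.internet.reactor
from twisted.internet import epollreactor
epollreactor.install()

from twisted.internet import reactor  # now the epoll-based reactor
```

The poll-based reactor is installed the same way via `twisted.internet.pollreactor`.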
