
Will a UDP socket pool improve the datagram delivery success rate and be more efficient?

I am developing a UDP client module on Solaris using C, and there are two candidate designs:

(1) Create one socket and send all messages through it. The receive thread only calls recvfrom on this socket.
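For concreteness, here is a minimal sketch of how option (1) could look in C; the destination address, port, and payload are placeholders of my own, and error handling is reduced to perror() for brevity:

```c
/* Option (1) sketch: a single UDP socket shared by the sender and the
 * receive thread.  Address, port, and payload are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(9000);                    /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);  /* placeholder host */

    /* All messages are sent through this one socket. */
    const char msg[] = "hello";
    if (sendto(sock, msg, sizeof(msg), 0,
               (struct sockaddr *)&dest, sizeof(dest)) < 0)
        perror("sendto");

    /* The receive thread simply blocks in recvfrom on the same socket. */
    char buf[1500];
    if (recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL) < 0)
        perror("recvfrom");

    close(sock);
    return 0;
}
```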

(2) Create a group of sockets. When sending a message, select a socket at random from the pool. The receive thread has to call poll or select on the whole group of sockets.
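And a minimal sketch of option (2) might look like the following; the pool size of 4, the random selection with rand(), and the 5-second poll timeout are arbitrary choices for illustration:

```c
/* Option (2) sketch: a pool of UDP sockets.  The sender picks one at
 * random; the receive thread multiplexes all of them with poll().
 * Pool size, address, port, and timeout are placeholders. */
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define POOL_SIZE 4

int main(void)
{
    int pool[POOL_SIZE];
    struct pollfd pfds[POOL_SIZE];

    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i] = socket(AF_INET, SOCK_DGRAM, 0);
        if (pool[i] < 0) { perror("socket"); return 1; }
        pfds[i].fd     = pool[i];
        pfds[i].events = POLLIN;
    }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(9000);                    /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);  /* placeholder host */

    /* Sender: pick a socket at random for each outgoing message. */
    srand((unsigned)time(NULL));
    const char msg[] = "hello";
    int idx = rand() % POOL_SIZE;
    if (sendto(pool[idx], msg, sizeof(msg), 0,
               (struct sockaddr *)&dest, sizeof(dest)) < 0)
        perror("sendto");

    /* Receive thread: wait for data on any socket in the pool. */
    if (poll(pfds, POOL_SIZE, 5000) > 0) {            /* 5 s timeout */
        for (int i = 0; i < POOL_SIZE; i++) {
            if (pfds[i].revents & POLLIN) {
                char buf[1500];
                if (recvfrom(pool[i], buf, sizeof(buf), 0, NULL, NULL) < 0)
                    perror("recvfrom");
            }
        }
    }

    for (int i = 0; i < POOL_SIZE; i++)
        close(pool[i]);
    return 0;
}
```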

When the throughput is low, I think the first design is fine.

If the throughput is high, I am wondering whether the second design could be better, because it dispatches messages across a group of sockets, which might improve the UDP datagram delivery success rate and be more efficient.

There's still only one network. You can have as many sockets, threads, whatever, as you like. The rate-determining step is the network. There is no point to this.

The question here primarily depends on how parallel the computer is (the number of cores) and how parallel the algorithm is. Most likely your CPU cores are vastly faster than the network connection anyway, and even one of them could easily overwhelm the connection. Thus, on a typical system, option (1) will give significantly better performance and lower drop rates.

This is because there is significant overhead to using a UDP port from several threads or processes, due to the internal locking the OS has to do to ensure that packet contents are not interleaved and corrupted. This causes a significant performance loss and a significantly increased chance of packet loss, where the kernel gives up waiting on other threads and simply throws your pending packets away.

In the extreme case where your cores are very slow and your connection is extremely fast (say, a 500-core supercomputer with a 10-100 Gbit fibre connection), option two could become more feasible: lock contention would be less likely, because the connection would be fast enough to keep many cores busy without them tripping over each other and locking often. This will not increase reliability (and may slightly decrease it), but it might increase throughput, depending on your architecture.

Overall, in nearly every case I would suggest option 1. If you really do have an extreme-throughput situation, you should look into other methods; however, if you are writing software for that kind of system, you would probably benefit from some more general training in massively parallel systems.

I hope this helps; if you have any queries, please leave a comment.
