DatagramSocket temporarily stops receiving packets (Java)

I have programmed a plugin in Lua for a game that sends player information via a UDP packet (512 bytes) to a remote server. The server reads the data from each packet and aggregates all player information into an XML file, which can then be viewed on the web by all players so they can see each other's current state.

I have programmed the server in Java using a DatagramSocket to handle the incoming packets, but I have noticed some strange behavior. After a certain period of time, the DatagramSocket appears to temporarily stop accepting packets for about 10-12 seconds, then resumes normal behavior again (no exceptions are thrown that I can see). There is definitely a relationship between how often the clients send packets and how quickly this behavior occurs: if I increase the update frequency of the clients, the DatagramSocket will "fail" sooner.

It may be worth mentioning that each packet received spawns a thread which handles the data in that packet. I am running the server on Linux, in case it makes a difference!

Does anyone know what could be causing this sort of behavior?

Andrew

UDP is a network protocol with absolutely no delivery guarantee. Any network component anywhere along the way (including the client and server machines themselves) can decide to drop packets for any reason, such as high load or network congestion.

This means you'll have to spelunk a bit to find out where the packet loss is happening. You can use something like Wireshark to see whether the packets are arriving at the server at all.

If reliable delivery is more important than lower latency, switch to TCP. If you stick to UDP, you'll have to allow for packets getting lost, regardless of whether you fix this particular issue at this particular time.

My conjecture would be that you're running out of receive buffer space on the server end.

You might want to revisit your design: spawning a thread is a pretty expensive operation. Doing so for every incoming packet would lead to a system with relatively low throughput, which could easily explain why the receive queue is building up.
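As an illustration only (not the poster's code), here is a minimal sketch of that idea: a single receive loop hands packets off to a small fixed thread pool, so the thread-creation cost is paid once at startup. The port number, pool size, and the handle method are all placeholder assumptions.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledUdpServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9876);       // placeholder port
        ExecutorService pool = Executors.newFixedThreadPool(4); // workers are reused

        byte[] buf = new byte[512]; // matches the 512-byte packets described above
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until a datagram arrives

            // Copy the payload out before buf is reused by the next receive()
            byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
            pool.submit(() -> handle(data)); // hand off; no new Thread per packet
        }
    }

    private static void handle(byte[] data) {
        // placeholder for the real per-packet processing
    }
}
```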

Also, see Specifying UDP receive buffer size at runtime in Linux.
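For a quick way to test the buffer-space conjecture, a sketch along these lines (the port number is a placeholder) checks and raises the socket's receive buffer. Note that setReceiveBufferSize is only a request: the OS may silently cap it (on Linux, at net.core.rmem_max), so reading the value back shows what was actually granted.

```java
import java.net.DatagramSocket;
import java.net.SocketException;

public class ReceiveBufferCheck {
    public static void main(String[] args) throws SocketException {
        DatagramSocket socket = new DatagramSocket(9876); // placeholder port

        // The default SO_RCVBUF is often small enough to overflow under bursts
        System.out.println("Default receive buffer: " + socket.getReceiveBufferSize());

        // Request a larger buffer; the OS may cap this (net.core.rmem_max on Linux)
        socket.setReceiveBufferSize(4 * 1024 * 1024);

        // Read back what the OS actually granted
        System.out.println("Granted receive buffer: " + socket.getReceiveBufferSize());

        socket.close();
    }
}
```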

PS: I am sure you already know that UDP does not guarantee message delivery, so I won't labour the point.

Starting a thread for each UDP packet is a Bad Idea™. UDP servers are traditionally coded as simple receive loops (after all, you only need one socket). This way you avoid all the overhead of threads, synchronization, and whatnot.
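A minimal sketch of such a receive loop, assuming a placeholder port and a hypothetical process method standing in for the real XML aggregation:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class SimpleUdpServer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(9876)) { // placeholder port
            byte[] buf = new byte[512];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks until a datagram arrives
                // Handle the data inline; no threads, no synchronization
                process(packet.getData(), packet.getLength());
            }
        }
    }

    private static void process(byte[] data, int length) {
        // placeholder: parse the player info and update the XML file
    }
}
```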
