
Java NIO UDP Multicast - dropped packets

We have a logging system that uses UDP multicast to send log events. The event rate is around 10,000 events/sec and the average event size is around 2 KB.

The NIO version of the application loses a small percentage of events (~2,000 out of roughly 12 million) in every test. Does anyone have any insight into what might cause this?

Sample Code: Without NIO:

    byte[] buf = new byte[65535];
    DatagramPacket packet = new DatagramPacket(buf, buf.length);

    try {
        while (!Thread.currentThread().isInterrupted()) {

            socket.receive(packet);

            final byte[] tmpBuffer = new byte[packet.getLength()];
            System.arraycopy(packet.getData(), 0, tmpBuffer, 0,
                    tmpBuffer.length);

            insertToNonBlockingQueue(tmpBuffer, packet.getSocketAddress());
        }
    } catch (Throwable t) {
        throw new RuntimeException("Encountered exception in Acceptor", t);
    } finally {
        Util.closeQuietly(socket);
    }

With NIO:

    ByteBuffer inBuffer = ByteBuffer.allocate(65535);
    try {
        while (!Thread.currentThread().isInterrupted()) {

            SocketAddress addr = channel.receive(inBuffer);

            inBuffer.flip();

            final byte[] tmpBuffer = new byte[inBuffer.limit()];
            inBuffer.get(tmpBuffer);

            inBuffer.clear();

            insertToNonBlockingQueue(tmpBuffer, addr);
        }
    } catch (ClosedByInterruptException ex) {
        log.info("Channel closed by interrupt"); // normal shutdown
    } catch (Throwable t) {
        throw new RuntimeException("Encountered exception in Acceptor", t);
    } finally {
        Util.closeQuietly(channel);
    }

Both listeners run at the same time, and every time the non-NIO version captures all the log events while the NIO version misses a few. It is not a network issue, because the behaviour is the same even when we swap the two versions between machines.

You forgot to compact() or clear() the buffer after the get(). This code will start dropping packets as soon as the buffer fills.
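DatagramChannel.receive() writes into the buffer's remaining space and, per its contract, silently discards any part of a datagram that does not fit, so once the buffer fills up every further datagram is truncated or lost. A minimal sketch of the flip/drain/reset cycle each iteration needs, using the channel and inBuffer variables from the question:

    SocketAddress addr = channel.receive(inBuffer); // fills [position, limit); bytes that don't fit are silently dropped

    inBuffer.flip();                                // switch to draining: limit = end of data, position = 0
    byte[] data = new byte[inBuffer.remaining()];
    inBuffer.get(data);                             // copy this datagram out

    inBuffer.clear();                               // reset so the next receive() sees the full 65535 bytes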

The DatagramPacket case should reset the packet length before every receive: receive() shrinks the packet's length to that of the last datagram received, so any later, larger datagram gets silently truncated.
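A minimal sketch of that fix, reusing the buf, packet, and socket variables from the question:

    while (!Thread.currentThread().isInterrupted()) {
        packet.setLength(buf.length); // restore full capacity; receive() shrank it to the last datagram's size
        socket.receive(packet);
        // ... copy and enqueue as before ...
    }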

It would be simpler to insert the actual DatagramPacket into the queue and use a new one per receive, or synthesise a new one in the NIO case. That way you don't need a new data structure.
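A sketch of that approach; packetQueue is a hypothetical name, and in the NIO branch the packet is synthesised from the drained bytes plus the sender address:

    BlockingQueue<DatagramPacket> packetQueue = new LinkedBlockingQueue<>();

    // Blocking-IO case: a fresh packet per receive, enqueued whole
    DatagramPacket packet = new DatagramPacket(new byte[65535], 65535);
    socket.receive(packet);
    packetQueue.offer(packet);

    // NIO case: synthesise an equivalent packet from the drained buffer
    packetQueue.offer(new DatagramPacket(tmpBuffer, tmpBuffer.length, addr));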

In addition to what EJP said, you should use a direct byte buffer as the read buffer; otherwise the socket will internally allocate a direct buffer, copy from that into your heap buffer, and then you'll copy from that into the array, i.e. there's a superfluous copy operation.
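That change is a one-liner in the NIO code above:

    // Direct buffer: the native layer can fill it in place, skipping the extra heap-buffer copy
    ByteBuffer inBuffer = ByteBuffer.allocateDirect(65535);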

Additionally, you may want to configure the socket's receive buffer to a size that can hold multiple packets: at ~10,000 events/sec of ~2 KB each, any pause in the reader thread (GC, scheduling) overflows a small kernel buffer and the excess datagrams are dropped.
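For example (4 MB is an arbitrary illustration; the OS caps the value, e.g. via net.core.rmem_max on Linux, so check what was actually granted):

    // NIO channel
    channel.setOption(StandardSocketOptions.SO_RCVBUF, 4 * 1024 * 1024);
    int granted = channel.getOption(StandardSocketOptions.SO_RCVBUF); // may be less than requested

    // Plain DatagramSocket
    socket.setReceiveBufferSize(4 * 1024 * 1024);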
