
Java TCP Socket Byte Heap Memory Issue

I have a Java TCP Server Socket program that is expecting about 64 bytes of data from a piece of remote hardware. The Server code is:

import java.io.DataInputStream;
import java.net.ServerSocket;
import java.net.Socket;

public void run() throws Exception
{
    // Open a server socket on localhost at port 11111
    ServerSocket welcomeSocket = new ServerSocket(11111);

    while (true) {

        // Block until a client connects
        Socket connectionSocket = welcomeSocket.accept();
        DataInputStream dIn = new DataInputStream(connectionSocket.getInputStream());

        // Read a 4-byte, big-endian length prefix
        int msgLen = dIn.readInt();
        System.out.println("RX Reported Length: " + msgLen);

        if (msgLen > 0) {
            // Allocate a buffer of whatever size the peer claims, then fill it
            byte[] msg = new byte[msgLen];
            dIn.readFully(msg);

            System.out.println("Message Length: " + msg.length);
            System.out.println("Recv[HEX]: " + StringTools.toHexString(msg));
        }

        connectionSocket.close();
    }
}
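Note that readFully only runs after new byte[msgLen] has already allocated whatever the 4-byte prefix claimed, which is exactly where a bogus length blows up the heap. A minimal defensive sketch, assuming a hypothetical MAX_MSG_LEN cap that is not part of the original code, would reject implausible lengths before allocating:

import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sanity cap; the real protocol reportedly uses ~64-byte messages
static final int MAX_MSG_LEN = 4096;

static byte[] readMessage(DataInputStream dIn) throws IOException {
    int msgLen = dIn.readInt();           // 4-byte big-endian length prefix
    if (msgLen < 0 || msgLen > MAX_MSG_LEN) {
        // A wildly wrong value here usually means the stream is misaligned
        // or the sender uses different framing or byte order
        throw new IOException("Implausible message length: " + msgLen);
    }
    byte[] msg = new byte[msgLen];
    dIn.readFully(msg);                   // blocks until msgLen bytes arrive
    return msg;
}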

This works correctly, as I am able to test locally with a simple ACK program:

import java.io.DataOutputStream;
import java.net.Socket;

public class ACK_TEST {

    public static void main(String[] args)
    {
        System.out.println("Byte Sender Running");

        try
        {
            ACK_TEST obj = new ACK_TEST();
            obj.run();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    public void run() throws Exception
    {
        Socket clientSocket = new Socket("localhost", 11111);
        DataOutputStream dOut = new DataOutputStream(clientSocket.getOutputStream());

        byte[] rtn = { 0x06 };     // single ACK byte

        dOut.writeInt(rtn.length); // write length of the message (4 bytes, big-endian)
        dOut.write(rtn);           // write the message body

        System.out.println("Byte Sent");
        clientSocket.close();
    }
}
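For what it's worth, DataOutputStream.writeInt always writes big-endian, so this test client puts the bytes 00 00 00 01 06 on the wire. A quick sketch to confirm that locally, using a ByteArrayOutputStream in place of the socket (WireDump is just an illustrative name):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

public class WireDump {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream dOut = new DataOutputStream(buf);

        dOut.writeInt(1);               // length prefix, big-endian
        dOut.write(new byte[]{ 0x06 }); // ACK payload

        // Prints: 00 00 00 01 06
        for (byte b : buf.toByteArray()) {
            System.out.printf("%02X ", b);
        }
    }
}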

And this correctly produces this output on the Server side:

[screenshot: server console output]

However, when I deploy the same Server code on the Raspberry Pi and the hardware sends data to it, the reported data length is far greater and causes a heap memory issue (even with the heap pre-set at 512 MB, which is definitely incorrect and unnecessary):

[screenshot: heap memory error on the Raspberry Pi]

My presumption is that I am reading the data from the TCP socket incorrectly; judging by the debug output from the hardware, it is certainly not sending packets of this size.

Update: I have no access to the Client source code. I do, however, need to take the incoming TCP data stream, place it into a byte array, and then another function (not shown) parses out some known HEX codes. That function expects a byte array input.
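One way to feed that parser without trusting any length prefix at all is to buffer whatever arrives until the connection closes. A minimal sketch, where the resulting byte array would be handed to the existing, unshown parsing function:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

static byte[] readAll(InputStream in) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    byte[] chunk = new byte[1024];
    int n;
    while ((n = in.read(chunk)) != -1) {  // -1 means the peer closed the connection
        buf.write(chunk, 0, n);
    }
    return buf.toByteArray();             // pass this to the existing parser
}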

Update: I reviewed the packet documentation. It is a 10-byte header. The first byte is a protocol identifier. The next 2 bytes are the Packet Length (the total number of bytes in the packet, including all the header bytes and the checksum), and the last 7 are a Unique ID. Therefore, I need to read those 2 bytes and create a byte array of that size, as sketched below.
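Under that layout, a sketch of the framed read might look like this, assuming the 2-byte length is big-endian (which readUnsignedShort expects; if it is little-endian, the two bytes must be swapped first):

import java.io.DataInputStream;
import java.io.IOException;

static byte[] readPacket(DataInputStream dIn) throws IOException {
    int protocolId = dIn.readUnsignedByte();   // byte 0: protocol identifier
    int packetLen  = dIn.readUnsignedShort();  // bytes 1-2: total packet length
    byte[] uniqueId = new byte[7];             // bytes 3-9: unique ID
    dIn.readFully(uniqueId);

    // packetLen counts the whole packet, so subtract the 10 header bytes;
    // the remainder is the payload plus the trailing checksum
    byte[] body = new byte[packetLen - 10];
    dIn.readFully(body);
    return body;
}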

Apparently the length from the header is about 1 GB, so the problem looks like it is on the other end. Are you perhaps mixing little-endian and big-endian encoding?
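That would fit the numbers: a little-endian sender encoding the expected length of 64 as 40 00 00 00 reads back through big-endian readInt as 0x40000000, i.e. 1073741824 bytes, about 1 GB. A quick sketch of the symptom and the byte-order fix (Integer.reverseBytes swaps the byte order):

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;

public class EndianDemo {
    public static void main(String[] args) throws Exception {
        // 64 encoded little-endian on the wire: 40 00 00 00
        byte[] wire = { 0x40, 0x00, 0x00, 0x00 };
        DataInputStream dIn = new DataInputStream(new ByteArrayInputStream(wire));

        int raw = dIn.readInt();                       // readInt assumes big-endian
        System.out.println(raw);                       // 1073741824 (~1 GB)
        System.out.println(Integer.reverseBytes(raw)); // 64, the intended length
    }
}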
