
TCP-IP, C# Client and Java Server, very high latency

In some tutorial-based code, I connected a C# web application to a Java socket server through a WebMethod in the web application's web service. Unfortunately, communication is pretty slow. For example, when the Java server echoes some data back to the C# client, I get the following results:

  • Size of data sent = 32 MB, total time = 980 ms (no problem)
  • Size of data sent = 4 MB, total time = 530 ms (becomes somewhat slower)
  • Size of data sent = 1 MB, total time = 520 ms (absolutely bottlenecked)
  • Size of data sent = 1 kB, total time = 516 ms (this must be some constant latency of something)

I've read that people achieve real-time communication (~60 messages/s), and some server applications even handle millions of streams per second. What could be the problem with my implementation? It sends multiple messages over a single open connection, so object-creation overhead should only affect the first message. Why am I getting ~500 ms of overhead per message?
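For comparison, here is a minimal loopback echo sketch of my own in plain Java (the `echoOnce` helper and the port choice are mine, not part of the application above). Once the connection is open, a small round trip on loopback normally completes in well under a millisecond, which is the kind of baseline the ~500 ms figure should be measured against:

```java
import java.io.*;
import java.net.*;

public class LoopbackEcho {
    // Open a loopback server, send one message, and return the echoed reply.
    static String echoOnce(String msg) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = any free port
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    InputStream sin = s.getInputStream();
                    OutputStream sout = s.getOutputStream();
                    byte[] buf = new byte[1024];
                    int n;
                    while ((n = sin.read(buf)) > 0) { // echo until the client closes
                        sout.write(buf, 0, n);
                        sout.flush();
                    }
                } catch (IOException ignored) { }
            });
            echo.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.setTcpNoDelay(true); // don't let Nagle's algorithm delay small writes
                byte[] out = msg.getBytes("US-ASCII");
                long start = System.nanoTime();
                client.getOutputStream().write(out);
                client.getOutputStream().flush();
                byte[] reply = new byte[out.length];
                new DataInputStream(client.getInputStream()).readFully(reply);
                long micros = (System.nanoTime() - start) / 1000;
                System.out.println("round trip: " + micros + " us");
                return new String(reply, "US-ASCII");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("ping"));
    }
}
```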

The C# webmethod connects to the Java server when the web app starts (on the first call), and every subsequent call reuses that same connection:

public static IPHostEntry ipHostInfo = Dns.Resolve(Dns.GetHostName());
public static IPAddress ipAddress = ipHostInfo.AddressList[0];
public static IPEndPoint remoteEP = new IPEndPoint(ipAddress, 9999);

// Create a TCP/IP  socket.
public static Socket sender = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
public static object lck = new object(); // lock object (missing from the original snippet)
public static int z = 0;

[WebMethod]
public BenchmarkData_ StartClient()
{
    lock(lck)
    {
        z++;
        if (z == 1)
        {
            sender.Connect(remoteEP);
        }
    }
    int bytesRec = 0;
    int boy = 0;
    byte[] bytes = new byte[1024 * 1024];
    int bytesSent = 0;
    SocketFlags sf = new SocketFlags();
    Stopwatch sw = new Stopwatch(); Stopwatch sw2 = new Stopwatch();

    #region r
    lock (lck)
    {
        sw.Start();
        // Data buffer for incoming data.

        // Connect to a remote device.
        try
        {
            // Establish the remote endpoint for the socket.
            // This example uses port 11000 on the local computer.

            // Create a TCP/IP  socket.
            sender.ReceiveBufferSize = 1024 * 1024;
            sender.ReceiveTimeout = 1;

            // Connect the socket to the remote endpoint. Catch any errors.
            try
            {
                Console.WriteLine("Socket connected to {0}", sender.RemoteEndPoint.ToString());
                // Encode the data string into a byte array.
                byte[] msg = Encoding.ASCII.GetBytes("This is a test<EOF>");

                // Send the data through the socket.
                bytesSent = sender.Send(msg);

                // Receive the response from the remote device.
                sw.Stop();

                sw2.Start();
                while ((bytesRec = sender.Receive(bytes)) > 0)
                {
                    boy += bytesRec;
                }

                Console.WriteLine("Echoed test = {0}", Encoding.ASCII.GetString(bytes, 0, bytesRec));

                // Release the socket.
                // sender.Shutdown(SocketShutdown.Both);
                // sender.Close();
                sw2.Stop();
            }
            catch (ArgumentNullException ane)
            {
                Console.WriteLine("ArgumentNullException : {0}", ane.ToString());
            }
            catch (SocketException se)
            {
                Console.WriteLine("SocketException : {0}", se.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Unexpected exception : {0}", e.ToString());
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }
    }
    #endregion

    return new BenchmarkData_() { .... };
}

Here is the Java code (half pseudo-code):

serverSocket=new ServerSocket(port); // in listener thread
Socket socket=serverSocket.accept(); // in listener thread

// in a dedicated thread per connection made:
out=new  BufferedOutputStream( socket.getOutputStream());
in=new DataInputStream(socket.getInputStream());        

boolean reading=true;
ArrayList<Byte> incoming=new ArrayList<Byte>();

while (in.available() == 0)
{
    Thread.sleep(3);    
}

while (in.available() > 0)
{
    int bayt=-2;
    try {
        bayt=in.read();
    } catch (IOException e) { e.printStackTrace(); }

    if (bayt == -1)
    {
        reading = false;
    }
    else
    {
        incoming.add((byte) bayt);                      
    }
}

byte [] incomingBuf=new byte[incoming.size()];
for(int i = 0; i < incomingBuf.length; i++)
{
    incomingBuf[i] = incoming.get(i);
}

String msg = new String(incomingBuf, StandardCharsets.UTF_8);
if (msg.length() < 8192)
    System.out.println("Socket Thread:  "+msg);
else
    System.out.println("Socket Thread: long msg.");

OutputStreamWriter outW = new OutputStreamWriter(out);
System.out.println(socket.getReceiveBufferSize());
outW.write(testStr.toString()); // 32MB, 4MB, ... 1kB versions
outW.flush();
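As an aside, the byte-by-byte read into an `ArrayList<Byte>` above boxes every byte and issues one `read()` call per byte, which is slow for large messages. A bulk read into a growing buffer is simpler and much faster; a sketch (the `readFully` helper name is mine). Note that reading until `-1` only terminates once the peer shuts down its side of the connection:

```java
import java.io.*;

public class BulkRead {
    // Accumulate the whole stream (until EOF) into one byte array,
    // reading in large chunks instead of one byte at a time.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream acc = new ByteArrayOutputStream();
        byte[] buf = new byte[64 * 1024]; // 64 KB chunks
        int n;
        while ((n = in.read(buf)) != -1) {
            acc.write(buf, 0, n);
        }
        return acc.toByteArray();
    }
}
```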

The problem was solved by replacing

while ((bytesRec = sender.Receive(bytes))>0)
{
   boy += bytesRec;
}

with

 while (sender.Available <= 0) ;

 while (sender.Available>0)
 {
      bytesRec = sender.Receive(bytes);
      boy += bytesRec;
 }

Now a 1 kB read completes in microseconds instead of 500 ms. Is that because it checks a single integer instead of trying to fill the whole buffer? Maybe. But it no longer reads the entire message sent by the server: it reads only a few kilobytes even when the server sends megabytes. It needs some kind of header to know how much to read.

When the server sends 3 MB and the client reads exactly the same amount, it takes 30 ms (both on the same machine). Trying to read more than the server has sent (even a single byte) raises an exception, so TCP really delivers exactly the amount the client needs to ask for.
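The "some kind of header" mentioned above is usually a length prefix: the sender writes a 4-byte length first, and the receiver reads exactly that many bytes. A minimal Java sketch (the `writeFrame`/`readFrame` names are mine; the same framing would be implemented with `BinaryReader`/`BinaryWriter` or manual byte-order handling on the C# side):

```java
import java.io.*;

public class Framing {
    // Sender: 4-byte big-endian length header, then the payload.
    static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Receiver: read the header, then exactly that many payload bytes.
    static byte[] readFrame(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload); // blocks until all len bytes have arrived
        return payload;
    }
}
```

With this scheme the receiver never busy-waits on `Available` and never over- or under-reads, regardless of how TCP fragments the stream.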

