TCP Server high CPU usage

C# Visual Studio 2013

I'm working on a rough TCP server/client. It works like this: the client sends a message to the server, and the server sends a "response" back to the client. This runs in a loop, since I'm going to use this data transfer for multiplayer in a game. However, I ran a performance test because my TCP server was using a lot of CPU once more than three clients connected. The performance profiler says the following method is responsible for 96% of the utilization. Can you help me fix this?

private static void ReceiveCallback(IAsyncResult AR)
{
    Socket current = (Socket)AR.AsyncState;
    int received;

    try
    {
        received = current.EndReceive(AR);
    }
    catch (SocketException)
    {
        Console.WriteLine("Client forcefully disconnected");
        current.Close(); // Don't call Shutdown: the socket may already be disposed, and it's disconnected anyway
        _clientSockets.Remove(current);
        return;
    }

    byte[] recBuf = new byte[received];
    Array.Copy(_buffer, recBuf, received);
    string text = Encoding.ASCII.GetString(recBuf);
    Console.WriteLine("Received Text: " + text);

    string msg = "Response!";
    byte[] data = Encoding.ASCII.GetBytes(msg);
    current.Send(data);

    current.BeginReceive(_buffer, 0, _BUFFER_SIZE, SocketFlags.None, ReceiveCallback, current);
}

Just in case, here's the AcceptCallback method, which starts the receive loop by passing ReceiveCallback to BeginReceive.

private static void AcceptCallback(IAsyncResult AR)
{
    Socket socket;

    try
    {
        socket = _serverSocket.EndAccept(AR);
    }
    catch (ObjectDisposedException) // I cannot seem to avoid this (on exit, when properly closing sockets)
    {
        return;
    }

    _clientSockets.Add(socket);
    socket.BeginReceive(_buffer, 0, _BUFFER_SIZE, SocketFlags.None, ReceiveCallback, socket);
    Console.WriteLine("Client connected...");
    _serverSocket.BeginAccept(AcceptCallback, null);
}

In the comments you say that your code sends data as fast as the CPU and network allow, but that you want to throttle it. You should decide what frequency you actually want to send at, and then send at exactly that frequency:

var delay = TimeSpan.FromMilliseconds(50);
while (true)
{
    await Task.Delay(delay);
    await SendMessageAsync(mySocket, someData);
    await ReceiveReplyAsync(mySocket);
}
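SendMessageAsync and ReceiveReplyAsync above are placeholder names, not framework APIs. A minimal sketch of how they might be implemented, assuming .NET 4.5, is to wrap the socket's existing Begin/End pair with Task.Factory.FromAsync so the APM calls become awaitable:

// requires: using System; using System.Net.Sockets; using System.Text; using System.Threading.Tasks;

static Task<int> SendMessageAsync(Socket socket, byte[] data)
{
    // Wrap BeginSend/EndSend in a Task so the caller can await it.
    return Task.Factory.FromAsync(
        (callback, state) => socket.BeginSend(data, 0, data.Length, SocketFlags.None, callback, state),
        socket.EndSend,
        null);
}

static async Task<string> ReceiveReplyAsync(Socket socket)
{
    // One buffer per call keeps the sketch simple; a real server would reuse buffers.
    var buffer = new byte[1024];
    int received = await Task.Factory.FromAsync(
        (callback, state) => socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, callback, state),
        socket.EndReceive,
        null);
    return Encoding.ASCII.GetString(buffer, 0, received);
}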

Note that I have made use of await to untangle the callback mess; once you add timers or delays into the mix, callbacks get unwieldy. You can do it any way you like, though. Alternatively, simply use synchronous socket IO on a background thread/task (sketched below); that is even simpler, and it's the preferred approach as long as there aren't too many threads. Note that MSDN usually uses the APM pattern with sockets for no good reason.

Note that Thread.Sleep/Task.Delay are totally fine to use when you want to wait based on time.
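For completeness, here is a minimal sketch of the synchronous background-thread variant, assuming one dedicated thread per connected client (StartClientLoop is a made-up helper name; this approach only scales while the client count stays small):

// requires: using System; using System.Net.Sockets; using System.Text; using System.Threading;

static void StartClientLoop(Socket socket)
{
    var thread = new Thread(() =>
    {
        var buffer = new byte[1024];
        while (socket.Connected)
        {
            Thread.Sleep(50);                      // throttle: roughly 20 round trips per second
            socket.Send(Encoding.ASCII.GetBytes("some data"));
            int received = socket.Receive(buffer); // blocks until the reply arrives
            if (received == 0)
                break;                             // remote side closed the connection
            Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, received));
        }
    });
    thread.IsBackground = true; // don't keep the process alive after the main thread exits
    thread.Start();
}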

Why are you doing this:

byte[] recBuf = new byte[received];
Array.Copy(_buffer, recBuf, received);
string text = Encoding.ASCII.GetString(recBuf);

You have one copy operation that copies the received bytes into recBuf, and then a second one to create the string. You can avoid the first, and your performance will improve. That said, some CPU usage here is normal, because even transforming the bytes into a string uses CPU.
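Concretely, Encoding.GetString has an overload that takes an offset and a count, so the text can be decoded straight from the receive buffer and the intermediate recBuf copy dropped:

// Decode directly from the shared receive buffer; no Array.Copy needed.
string text = Encoding.ASCII.GetString(_buffer, 0, received);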
