Strange behavior with HttpWebRequest in C# console application

I have a simple console application that sends a number of POST requests to a server and prints out the average successful requests per second. The application makes use of the HttpWebRequest class to do the actual message sending. It is multi-threaded and leverages the C# ThreadPool.

I have tested this application against a server that immediately sends a response back and against one that actually does some work. In the former scenario, a single instance of my application achieves around 30k messages per second; in the latter, around 12k messages per second.

The strange behavior I am observing is that if I run multiple instances (2-4) of my application, I achieve a greater aggregate throughput than if I run a single instance. For the two scenarios described above, the aggregate throughput is 40k and 20k respectively.

What I cannot figure out is why a single instance of my application cannot achieve the 20k throughput in the second scenario if a single instance of my application has been proven to achieve a throughput of 30k in the first scenario.

I have played around with the maxconnection parameter as well as the various thread-pool parameters, e.g. minthreads / maxthreads. I have also tried altering the implementation, e.g. Parallel.For / Task.Run / ThreadPool.QueueUserWorkItem. I have even tested an asynchronous paradigm. Unfortunately, all versions exhibit the same behavior.

Any idea what could be going on?

EDIT

The main loop looks as follows:

Task.Run(() =>
{
    while (true)
    {
        _throttle.WaitOne();
        ThreadPool.QueueUserWorkItem(SendStressMessage);
    }
});

Ripping out the unnecessary bits, SendStressMessage looks like this:

private static void SendStressMessage(Object state)
{
    var message = _sampleMessages.ElementAt(_random.Next(_sampleMessages.Count));

    if (SendMessage(_stressuri, message))
    {
        Interlocked.Increment(ref _successfulMessages);
        _sucessfulMessageCounter.Increment();
        _QPSCounter.Increment();
    }
    else
    {
        Interlocked.Increment(ref _failedMessages);
        _failedMessageCounter.Increment();
    }

    // Check time of last QPS read-out
    if (DateTime.UtcNow.Subtract(_lastPrint).TotalSeconds > _printDelay)
    {
        lock (_lock)
        {
            // Check one last time while holding the lock
            if (DateTime.UtcNow.Subtract(_lastPrint).TotalSeconds > _printDelay)
            {
                // Print current QPS and update last print / successful message count
                Console.WriteLine("Current QPS: " + (int)((Thread.VolatileRead(ref _successfulMessages) - _lastSuccessfulMessages) / DateTime.UtcNow.Subtract(_lastPrint).TotalSeconds));
                _lastSuccessfulMessages = _successfulMessages;
                _lastPrint = DateTime.UtcNow;
            }
        }
    }

    _throttle.Release();
}

Finally, the SendMessage method looks as follows:

private static bool SendMessage(Uri uri, byte[] message)
{
    HttpWebRequest request = null;
    HttpWebResponse response = null;

    try
    {
        request = WebRequest.CreateHttp(uri);
        request.Method = "POST";
        request.KeepAlive = true;
        request.Proxy = null;
        request.ContentLength = message.Length;

        // Write to request body
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(message, 0, message.Length);
        }

        // Try posting message
        response = (HttpWebResponse)request.GetResponse();

        // Check response status and return accordingly
        return response.StatusCode == HttpStatusCode.OK;
    }
    catch
    {
        return false;
    }
    finally
    {
        // Dispose of response
        if (response != null)
        {
            response.Close();
            response.Dispose();
        }
    }
}

As you can see above, there is a lock. I have tried removing it, and this does not affect performance.

UPDATE

As stated above, I initially experienced poor performance when I rewrote the application to be asynchronous. I had not realized that the implementation was incorrect, as I was in fact blocking on an I/O call. As pointed out by @David d C e Freitas, there is an async version of GetRequestStream which I was not utilizing. Finally, my initial implementation did not throttle the number of pending messages, which greatly reduced performance because too many async handles were being created.

An outline of the final solution which does not exhibit the issues described above follows:

The main loop:

Task.Run(async () =>
{
    while (true)
    {
        await _throttle.WaitAsync();
        SendStressMessageAsync();
    }
});

A stripped down version of SendStressMessageAsync() follows:

private static async void SendStressMessageAsync()
{
    var message = _sampleMessages.ElementAt(_random.Next(_sampleMessages.Count));

    if (await SendMessageAsync(_stressuri, message))
    {
        Interlocked.Increment(ref _successfulMessages);
        _sucessfulMessageCounter.Increment();
        _QPSCounter.Increment();
    }
    else
    {
        // Failed request - increment failed count
        Interlocked.Increment(ref _failedMessages);
        _failedMessageCounter.Increment();
    }

    // Check time of last QPS read-out
    if (DateTime.UtcNow.Subtract(_lastPrint).TotalSeconds > _printDelay)
    {
        await _printlock.WaitAsync();
        // Check one last time while holding the lock
        if (DateTime.UtcNow.Subtract(_lastPrint).TotalSeconds > _printDelay)
        {
            // Print current QPS and update last print time
            Console.WriteLine("Current QPS: " + (int)(Interlocked.Read(ref _successfulMessages) / DateTime.UtcNow.Subtract(_startTime).TotalSeconds));
            _lastPrint = DateTime.UtcNow;
        }
        _printlock.Release();
    }

    _throttle.Release();
}

Finally, the async SendMessageAsync method looks as follows:

private static async Task<bool> SendMessageAsync(Uri uri, byte[] message)
{
    HttpWebRequest request = null;
    HttpWebResponse response = null;

    try
    {
        // Create POST request to provided VIP / Port
        request = WebRequest.CreateHttp(uri);
        request.Method = "POST";
        request.KeepAlive = true;
        request.Proxy = null;
        request.ContentLength = message.Length;

        // Write to request body asynchronously
        using (Stream requestStream = await request.GetRequestStreamAsync())
        {
            await requestStream.WriteAsync(message, 0, message.Length);
        }

        // Try posting message
        response = (HttpWebResponse)await request.GetResponseAsync();

        // Check response status and return accordingly
        return response.StatusCode == HttpStatusCode.OK;
    }
    catch
    {
        // Failed request
        return false;
    }
    finally
    {
        // Dispose of response
        if (response != null)
        {
            response.Close();
            response.Dispose();
        }
    }
}

My best guess is that you are simply not fully utilizing your resources until you run several instances. Using parallelism or async doesn't magically speed up your application; you need to apply it at the point where the bottleneck actually is.

The specific problem is that you probably used parallelism, but you needed a greater degree of parallelism because the nature of your operation is not CPU-intensive but I/O-intensive. You can use parallelism and async together to fully utilize your resources.

That's all I can do without any specific code.

The .NET Framework limits the number of concurrent connections made to a particular host, via the application configuration item maxconnection. By default this limit is 2 connections per host.

What this means is that with the default settings only 2 requests can be active at any time. If you initiate 10 requests at once, only 2 of them will connect immediately; the rest will wait until one of the available slots becomes free.

Try modifying your App.config to include the following:

<configuration>
    <system.net>
        <connectionManagement>
            <add address = "*" maxconnection = "10" />
        </connectionManagement>
    </system.net>
</configuration>

This will allow 10 connections per host, which should greatly improve your throughput.
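Alternatively, the same limit can be raised programmatically at startup instead of through App.config. This is only a sketch; the value 10 mirrors the config example above and is illustrative, not a recommendation:

```csharp
using System.Net;

static class Startup
{
    public static void Configure()
    {
        // Equivalent to <add address="*" maxconnection="10" /> in App.config.
        // Must run before the first request, since existing ServicePoints
        // keep the limit they were created with.
        ServicePointManager.DefaultConnectionLimit = 10;
    }
}
```

Either approach works; the config file has the advantage of being tunable without recompiling.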

It seems your question is: why does a request that does nothing achieve up to 30k msg/s while a request that does some work achieves only 12k msg/s?

Can you see how many requests are in a waiting state (in flight) in your test application? I.e., how many have opened the socket and fully written the request out, but haven't started receiving data back? Perhaps your test application can only handle so many in-progress requests at a time before the pending queue slows down.

So this question is really about the server rather than your test client, as the server is more likely what determines how many requests can be handled per second. What is the server written in, and what have you tweaked there?

Update:

One thing to consider is that there are several synchronous operations happening behind the scenes that you aren't handling:

  1. A synchronous DNS lookup for the address (try using the IP directly).
  2. The TCP connection setup/startup time (opening a connection).
  3. GetRequestStream can be done asynchronously, i.e. BeginGetRequestStream or GetRequestStreamAsync.
  4. GetResponse can be done asynchronously, i.e. BeginGetResponse or GetResponseAsync.

Blocking on any of these actions will limit your throughput in the single threaded instance.
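As a sketch of the first point, the DNS lookup can be taken off the hot path by resolving the host name once up front and reusing the raw IP for every request. The helper name `ResolveOnceAsync` and the choice of the first returned address are illustrative, not from the original code:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

static class DnsHelper
{
    // Resolve the host name a single time, asynchronously, and rebuild
    // the URI around the raw IP so subsequent requests skip the lookup.
    public static async Task<Uri> ResolveOnceAsync(Uri original)
    {
        IPAddress[] addresses = await Dns.GetHostAddressesAsync(original.DnsSafeHost);

        // Take the first resolved address; a real client might prefer
        // a specific address family or rotate between results.
        var builder = new UriBuilder(original) { Host = addresses[0].ToString() };
        return builder.Uri;
    }
}
```

If the server relies on name-based virtual hosting, the original host name can be restored per request by setting HttpWebRequest.Host, so only the connection target changes.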
