
Java Multithreaded Web Server - Not receiving multiple GET requests

I have the starts of a very basic multi-threaded web server. It can receive all GET requests as long as they come one at a time.

However, when multiple GET requests come in at the same time, sometimes they are all received, and other times some are missing.

I tested this by creating an HTML page with multiple image tags pointing to my web server and opening the page in Firefox. I always use Shift+Refresh.

Here is my code, I must be doing something fundamentally wrong.

import java.io.*;
import java.net.*;

public final class WebServer
{
    public static void main(String argv[]) throws Exception
    {
        int port = 6789;

        ServerSocket serverSocket = null;
        try
        {
            serverSocket = new ServerSocket(port);
        }
        catch(IOException e)
        {
            System.err.println("Could not listen on port: " + port);
            System.exit(1);
        }

        while(true)
        {
            try
            {
                Socket clientSocket = serverSocket.accept();
                new Thread(new ServerThread(clientSocket)).start();
            }
            catch(IOException e)
            {

            }
        }
    }
}

import java.io.*;

public class ServerThread implements Runnable
{
    static Socket clientSocket = null;

    public ServerThread(Socket clientSocket)
    {
        this.clientSocket = clientSocket;
    }

    public void run()
    {
        String headerline = null;
        DataOutputStream out = null;
        BufferedReader in = null;

        int i;

        try
        {
            out = new DataOutputStream(clientSocket.getOutputStream());
            in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));

            while((headerline = in.readLine()).length() != 0)
            {
                System.out.println(headerline);
            }
        }
        catch(Exception e)
        {

        }
    }
}

First, @skaffman's comment is spot on. You should not catch-and-ignore exceptions like your code is currently doing. In general, it is a terrible practice. In this case, you could well be throwing away the evidence that would tell you what the real problem is.
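As a minimal illustration of the alternative, the catch block should at least log what happened before (or instead of) moving on. The message text here is just an example:

```java
import java.io.IOException;

public class CatchDemo {
    static void handle() {
        try {
            throw new IOException("connection reset"); // stand-in for a real I/O failure
        } catch (IOException e) {
            // At minimum, record the failure instead of silently discarding it;
            // an empty catch block destroys the evidence of what went wrong.
            System.err.println("request failed: " + e.getMessage());
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        handle();
        System.out.println("done");
    }
}
```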

Second, I think you might be suffering from a misapprehension of what a server is capable of. No matter how you implement it, a server can only handle a certain number of requests per second. If you throw more requests at it than that, some have to be dropped.


What I suspect is happening is that you are sending too many requests in a short period of time, and overwhelming the operating system's request buffer.

When your code binds to a server socket, the operating system sets up a queue to hold incoming connection requests on the bound IP address/port. This queue has a finite size, and if it is full when a new request comes in, the operating system will drop that request. This means that if your application is not able to accept requests fast enough, some will be dropped.

What can you do about it?

  • There is an overload of ServerSocket.bind(...) (and of the ServerSocket constructor) that allows you to specify the backlog of requests to be held in the OS-level queue. You could use this to set a larger backlog.
  • You could change your main loop to pull requests from the queue faster. One issue with your current code is that you are creating a new Thread for each request. Thread creation is expensive, and you can reduce the cost by using a thread pool to recycle threads used for previous requests.
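Both suggestions can be sketched together. The following is a self-contained demo, not a drop-in replacement for the question's code: it picks a free port, sets an explicit backlog, and hands each accepted connection to a small fixed thread pool instead of spawning a fresh Thread per request. The pool size, backlog, and request count are arbitrary values chosen for the demo:

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class PooledWebServer {
    public static void main(String[] args) throws Exception {
        // Second constructor argument is the backlog: how many pending
        // connections the OS may queue before refusing new ones
        // (the exact limit is platform-dependent). Port 0 = any free port.
        ServerSocket serverSocket = new ServerSocket(0, 100);
        int port = serverSocket.getLocalPort();

        // A fixed pool recycles a small number of threads instead of
        // paying thread-creation cost for every request.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        int requests = 20;
        CountDownLatch served = new CountDownLatch(requests);

        // Accept loop: pull connections off the OS queue quickly,
        // then hand the work to the pool.
        Thread acceptor = new Thread(() -> {
            try {
                for (int i = 0; i < requests; i++) {
                    Socket client = serverSocket.accept();
                    pool.execute(() -> {
                        try (Socket c = client;
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(c.getInputStream()));
                             PrintWriter out = new PrintWriter(c.getOutputStream())) {
                            String line;
                            while ((line = in.readLine()) != null && !line.isEmpty()) {
                                // consume request headers up to the blank line
                            }
                            out.print("HTTP/1.0 200 OK\r\n\r\nhello");
                            out.flush();
                        } catch (IOException e) {
                            e.printStackTrace(); // never swallow exceptions silently
                        } finally {
                            served.countDown();
                        }
                    });
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        acceptor.start();

        // Fire 20 concurrent GETs at the server from this same process.
        ExecutorService clients = Executors.newFixedThreadPool(requests);
        for (int i = 0; i < requests; i++) {
            clients.execute(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream());
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()))) {
                    out.print("GET / HTTP/1.0\r\n\r\n");
                    out.flush();
                    while (in.readLine() != null) { /* drain response */ }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
        }

        boolean ok = served.await(10, TimeUnit.SECONDS);
        System.out.println("all served: " + ok);
        pool.shutdown();
        clients.shutdown();
        serverSocket.close();
    }
}
```

Because the accept loop does nothing but accept-and-dispatch, the OS queue drains quickly even when requests arrive in bursts.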

CAVEATS

You need to be a bit careful. It is highly likely that you can modify your application to accept (not drop) more requests in the short term. But in the long term, you should only accept requests as fast as you can actually process them. If the server accepts requests faster than it can process them, a number of bad things can happen:

  • You will use a lot of memory with all of the threads trying to process requests. This will increase CPU overheads in various ways.
  • You may increase contention for internal Java data structures, databases and so on, tending to reduce throughput.
  • You will increase the time taken to process and reply to individual GET requests. If the delay is too long, the client may time out the request ... and send it again. If this happens, the work done by the server will be wasted.

To defend yourself against this, it is actually best to NOT eagerly accept as many requests as you can. Instead, use a bounded thread pool, and tune the pool size (etc) to optimize the throughput rate while keeping the time to process individual requests within reasonable limits.
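One way to get this backpressure is a ThreadPoolExecutor with a bounded work queue and a CallerRunsPolicy: when the queue fills, the accepting thread runs the task itself, which naturally throttles how fast new work is taken on. The pool and queue sizes below are illustrative, not tuned values:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // 2 worker threads, at most 4 queued tasks. CallerRunsPolicy makes the
        // submitting thread execute the task itself when the queue is full,
        // slowing down the rate at which new work is accepted instead of
        // either dropping tasks or growing an unbounded backlog.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),
                new ThreadPoolExecutor.CallerRunsPolicy());

        AtomicInteger completed = new AtomicInteger();
        int tasks = 50;
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { Thread.sleep(5); } catch (InterruptedException ignored) {}
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        // All 50 tasks ran: none were dropped, and memory use stayed bounded.
        System.out.println("completed: " + completed.get());
    }
}
```

Tuning the pool size and queue length against measured throughput and per-request latency is the real work here; the mechanism just guarantees the server never takes on more than it can hold.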

I actually discovered the problem was this:

  static Socket clientSocket = null;

Once I removed the static keyword, it worked perfectly.
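That fix makes sense: a static field is shared by every instance of the class, so each new ServerThread overwrote the single shared clientSocket, and threads already running ended up reading from (or losing) the newest connection. A tiny demo of the difference, using hypothetical classes unrelated to the server code:

```java
public class StaticFieldDemo {
    // One field shared by ALL instances - the bug pattern.
    static class Shared {
        static String value;
        Shared(String v) { value = v; }
    }

    // One field PER instance - the fix.
    static class PerInstance {
        String value;
        PerInstance(String v) { this.value = v; }
    }

    public static void main(String[] args) {
        Shared a = new Shared("first");
        Shared b = new Shared("second");      // overwrites the one shared field
        System.out.println(Shared.value);     // "first" is gone

        PerInstance c = new PerInstance("first");
        PerInstance d = new PerInstance("second");
        System.out.println(c.value);          // each instance keeps its own value
    }
}
```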
