
Grizzly Http Server - accepting only one connection at a time

I have a Grizzly HTTP server with asynchronous processing added. Despite the async support, it queues my requests and processes only one at a time.

The HttpHandler is bound to path "/", and the server listens on port 7777.

Behavior observed when I hit http://localhost:7777 from two browsers simultaneously: the second call waits until the first one completes. I want the second HTTP call to be processed in tandem with the first.

GitHub link to my project

Here are the classes

package com.grizzly;

import java.io.IOException;
import java.net.URI;

import javax.ws.rs.core.UriBuilder;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.strategies.WorkerThreadIOStrategy;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;

import com.grizzly.http.IHttpHandler;
import com.grizzly.http.IHttpServerFactory;

public class GrizzlyMain {

    private static HttpServer httpServer;

    private static void startHttpServer(int port) throws IOException {
        URI uri = getBaseURI(port);

        httpServer = IHttpServerFactory.createHttpServer(uri,
            new IHttpHandler(null));

        TCPNIOTransport transport = getListener(httpServer).getTransport();

        ThreadPoolConfig config = ThreadPoolConfig.defaultConfig()
                .setPoolName("worker-thread-").setCorePoolSize(6).setMaxPoolSize(6)
                .setQueueLimit(-1)/* same as default */;

        transport.configureBlocking(false);
        transport.setSelectorRunnersCount(3);
        transport.setWorkerThreadPoolConfig(config);
        transport.setIOStrategy(WorkerThreadIOStrategy.getInstance());
        transport.setTcpNoDelay(true);

        System.out.println("Blocking Transport(T/F): " + transport.isBlocking());
        System.out.println("Num SelectorRunners: "
            + transport.getSelectorRunnersCount());
        System.out.println("Num WorkerThreads: "
            + transport.getWorkerThreadPoolConfig().getCorePoolSize());

        httpServer.start();
        System.out.println("Server Started @" + uri.toString());
    }

    public static void main(String[] args) throws InterruptedException,
        IOException, InstantiationException, IllegalAccessException,
        ClassNotFoundException {
        startHttpServer(7777);

        System.out.println("Press any key to stop the server...");
        System.in.read();
    }

    private static NetworkListener getListener(HttpServer httpServer) {
        return httpServer.getListeners().iterator().next();
    }

    private static URI getBaseURI(int port) {
        return UriBuilder.fromUri("http://0.0.0.0/").port(port).build();
    }

}

package com.grizzly.http;

import java.io.IOException;
import java.util.Date;
import java.util.concurrent.ExecutorService;

import javax.ws.rs.core.Application;

import org.glassfish.grizzly.http.server.HttpHandler;
import org.glassfish.grizzly.http.server.Request;
import org.glassfish.grizzly.http.server.Response;
import org.glassfish.grizzly.http.util.HttpStatus;
import org.glassfish.grizzly.threadpool.GrizzlyExecutorService;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.spi.Container;

import com.grizzly.Utils;

/**
 * Jersey {@code Container} implementation based on Grizzly
 * {@link org.glassfish.grizzly.http.server.HttpHandler}.
 *
 * @author Jakub Podlesak (jakub.podlesak at oracle.com)
 * @author Libor Kramolis (libor.kramolis at oracle.com)
 * @author Marek Potociar (marek.potociar at oracle.com)
 */
public final class IHttpHandler extends HttpHandler implements Container {

    private static int reqNum = 0;

    final ExecutorService executorService = GrizzlyExecutorService
            .createInstance(ThreadPoolConfig.defaultConfig().copy()
                    .setCorePoolSize(4).setMaxPoolSize(4));

    private volatile ApplicationHandler appHandler;

    /**
     * Create a new Grizzly HTTP container.
     *
     * @param application
     *          JAX-RS / Jersey application to be deployed on Grizzly HTTP
     *          container.
     */
    public IHttpHandler(final Application application) {
    }

    @Override
    public void start() {
        super.start();
    }

    @Override
    public void service(final Request request, final Response response) {
        System.out.println("\nREQ_ID: " + reqNum++);
        System.out.println("THREAD_ID: " + Utils.getThreadName());

        // Suspend the response so Grizzly does not flush/finish it once we exit the service(...) method
        response.suspend();

        executorService.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("Executor Service Current THREAD_ID: "
                            + Utils.getThreadName());
                    Thread.sleep(25 * 1000);
                } catch (Exception e) {
                    response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
                } finally {
                    String content = updateResponse(response);
                    System.out.println("Response resumed > " + content);
                    response.resume();
                }
            }
        });
    }

    @Override
    public ApplicationHandler getApplicationHandler() {
        return appHandler;
    }

    @Override
    public void destroy() {
        super.destroy();
        appHandler = null;
    }

    // Auto-generated stuff
    @Override
    public ResourceConfig getConfiguration() {
        return null;
    }

    @Override
    public void reload() {

    }

    @Override
    public void reload(ResourceConfig configuration) {
    }

    private String updateResponse(final Response response) {
        String data = null;
        try {
            data = new Date().toLocaleString();
            response.getWriter().write(data);
        } catch (IOException e) {
            data = "Unknown error from our server";
            response.setStatus(500, data);
        }

        return data;
    }

}

package com.grizzly.http;

import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.http.server.ServerConfiguration;

/**
 * @author smc
 */
public class IHttpServerFactory {

    private static final int DEFAULT_HTTP_PORT = 80;

    public static HttpServer createHttpServer(URI uri, IHttpHandler handler) {

        final String host = uri.getHost() == null ? NetworkListener.DEFAULT_NETWORK_HOST
            : uri.getHost();
        final int port = uri.getPort() == -1 ? DEFAULT_HTTP_PORT : uri.getPort();

        final NetworkListener listener = new NetworkListener("IGrizzly", host, port);
        listener.setSecure(false);

        final HttpServer server = new HttpServer();
        server.addListener(listener);

        final ServerConfiguration config = server.getServerConfiguration();
        if (handler != null) {
            config.addHttpHandler(handler, uri.getPath());
        }

        config.setPassTraceRequest(true);
        return server;
    }
}

It seems the problem is the browser waiting for the first request to complete, so this is more of a client-side than a server-side issue. It disappears if you test with two different browser processes, or even if you open two distinct paths (say, localhost:7777/foo and localhost:7777/bar) in the same browser process (note: the query string participates in making up the path in the HTTP request line).
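One way to verify that the server side really is concurrent is to bypass the browser entirely. Below is a minimal sketch of a standalone test client (my own addition, not part of the linked project; the class name ConcurrencyTest and the localhost:7777 URL are assumptions based on the question) that fires two requests from two threads, each over its own connection. If the server processes them in parallel, both responses should arrive roughly 25 seconds after start rather than 25 and 50 seconds.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConcurrencyTest {

    public static void main(String[] args) throws InterruptedException {
        Runnable call = () -> {
            long start = System.currentTimeMillis();
            try {
                // Each thread opens its own HttpURLConnection, so there is no
                // client-side queueing on a shared persistent connection.
                HttpURLConnection conn = (HttpURLConnection) new URL(
                        "http://localhost:7777/").openConnection();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String body = in.readLine();
                    System.out.printf("%s got \"%s\" after %d ms%n",
                            Thread.currentThread().getName(), body,
                            System.currentTimeMillis() - start);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        Thread t1 = new Thread(call, "client-1");
        Thread t2 = new Thread(call, "client-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}

With the 25-second sleep in IHttpHandler.service(...), seeing both threads report around 25000 ms would indicate the queueing observed in the browser is not happening on the server.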

How I understood it

Connections in HTTP/1.1 are persistent by default, i.e. browsers reuse the same TCP connection over and over again to speed things up. However, this does not mean that all requests to the same domain are serialized: in fact, a connection pool is allocated on a per-hostname basis (source). Unfortunately, requests with the same path are effectively enqueued (at least in Firefox and Chrome); I suspect this is a safeguard browsers employ to protect server resources (and thus the user experience).
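As a side note on the persistent-connection point: a plain HTTP client can opt out of connection reuse explicitly by sending a Connection: close header, so each request travels on its own short-lived TCP connection. The sketch below is only an illustration of that mechanism (the class name and the localhost:7777 URL are my assumptions); it is not something the browsers above let you control.

import java.net.HttpURLConnection;
import java.net.URL;

public class NonPersistentRequest {

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(
                "http://localhost:7777/").openConnection();
        // Ask for a non-persistent connection: the connection is closed after
        // this response instead of being kept alive for reuse by later requests.
        conn.setRequestProperty("Connection", "close");
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}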

Real-world applications don't suffer from this because different resources are deployed to different URLs.

DISCLAIMER: I wrote this answer based on my observations and some educated guesses. I believe this is what happens, but a tool like Wireshark should be used to follow the TCP streams and confirm it definitively.
