
How can you safely perform blocking operations in a Netty channel handler?

I'm building a small Netty-based application that performs I/O operations across socket connections (i.e. telnet/ssh). I am starting up my socket server with Netty's ServerBootstrap class, giving it:

  1. An event loop of type NioEventLoopGroup (i.e. a pool of shared threads that should not be subjected to blocking operations).

  2. A channel of type NioServerSocketChannel (I believe this is required to correspond with #1 above).

  3. A very simple pipeline, with a channel handler that extends ChannelInboundHandlerAdapter.

My handler's channelRead(...) method is called whenever a command string is received from a client socket connection, and returns some response string depending on the command.
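
For reference, the setup looks roughly like this (CommandHandler, the port, and the echo logic are simplified placeholders for my real code):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class CommandServer {

    // Simplified stand-in for my real handler: one response string per command string.
    static class CommandHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            String command = (String) msg;
            String response = "echo: " + command;   // placeholder for the real command logic
            ctx.writeAndFlush(response + "\r\n");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        var group = new NioEventLoopGroup();                  // 1. shared event loop threads
        try {
            new ServerBootstrap()
                    .group(group)
                    .channel(NioServerSocketChannel.class)    // 2. NIO server channel
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // 3. very simple pipeline: line/string codecs plus my handler
                            ch.pipeline().addLast(
                                    new LineBasedFrameDecoder(1024),
                                    new StringDecoder(),
                                    new StringEncoder(),
                                    new CommandHandler());
                        }
                    })
                    .bind(4000).sync()
                    .channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}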

Everything is fine for the commands which involve no blocking operations. However, there are SOME commands for which I now need to read from or write to a database. Those JDBC calls are inherently going to be blocking... although I could use a CompletableFuture (or whatever) to handle them in a separate thread.

But even if I did "roll-my-own async" by performing blocking operations in separate threads, I'm not sure how I would reconnect the results from those spawned threads back to the Netty channel handler in the main thread.
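
Concretely, the "roll-my-own async" I have in mind is something like this inside channelRead(...), where dbExecutor and blockingDbLookup are hypothetical placeholders, and the commented-out part is exactly what I don't know how to do:

String command = (String) msg;

// Run the blocking JDBC call somewhere other than the NioEventLoopGroup thread...
CompletableFuture
        .supplyAsync(() -> blockingDbLookup(command), dbExecutor)
        .thenAccept(result -> {
            // ...but this callback runs on a dbExecutor thread, not the event loop.
            // How do I hand `result` back to the Netty channel handler / pipeline here?
        });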

I see that the ChannelHandlerContext class has methods like:

ChannelFuture writeAndFlush(Object msg, ChannelPromise promise);

... as alternatives to the ones I'm currently using:

ChannelFuture writeAndFlush(Object msg);

But I can't find any documentation or guidance (or even helpful Javadocs) explaining how one might use this ChannelPromise type in this use case. Its name suggests that it might be relevant, but it might not be. After all, the writeAndFlush method still takes the outgoing message as its first parameter... so what good would it do to stuff your blocking operation into a "promise" second parameter, if you need its result to be already on hand for the first parameter?
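
As far as I can tell, the two-argument overload is used something like this (computeResponse is a placeholder for my command logic), where the promise merely gets notified once the write itself has completed:

String response = computeResponse(command);   // still has to be ready before the call

ChannelPromise promise = ctx.newPromise();
promise.addListener(future -> {
    // fires when the write has completed or failed -- nothing here helps produce `response`
    if (!future.isSuccess()) {
        // handle the failed write
    }
});
ctx.writeAndFlush(response, promise);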

What is the right track here? Is there some way to handle blocking operations in separate threads, so that Netty's NioEventLoopGroup does not block? Or is this simply not how Netty works, and you should use a different event loop implementation (i.e. one that spawns a separate thread for each client socket connection) if you need to support blocking?

It seems that it's never OK to run blocking operations on a Netty thread, even when a dedicated DefaultEventExecutorGroup is associated with a handler.
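
(By "associated with a handler" I mean adding the handler to the pipeline with its own executor group, roughly like the sketch below; the group size and MyChannelHandler are placeholders.)

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class PipelineSetup extends ChannelInitializer<SocketChannel> {

    // Separate executor group intended to keep "slow" handlers off the NIO event loop.
    private static final EventExecutorGroup handlerGroup = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        // The handler's callbacks now run on a DefaultEventExecutor from handlerGroup
        // instead of on the channel's NIO event loop thread.
        ch.pipeline().addLast(handlerGroup, new MyChannelHandler());
    }
}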

The problem with DefaultEventExecutorGroup(16) is that it is composed of a fixed number of DefaultEventExecutors, which means that the same DefaultEventExecutor can be assigned to multiple channels, so a blocking operation in one channel handler will freeze processing in all other channel handlers assigned to that same DefaultEventExecutor. You can write a simple test to check this:

import io.netty.util.concurrent.DefaultEventExecutorGroup;

public class DefaultEventExecutorMain {
    public static void main(String... args) {
        // With a group of 2, next() hands out executors round-robin,
        // so eventExecutor1 and eventExecutor3 are the same executor (and thread).
        var group = new DefaultEventExecutorGroup(2);

        var eventExecutor1 = group.next();
        var eventExecutor2 = group.next();
        var eventExecutor3 = group.next();

        eventExecutor1.execute(() -> {
            System.out.println("slow task start");
            try {
                Thread.sleep(60000);
            } catch (InterruptedException t) {
                throw new RuntimeException(t);
            }
            System.out.println("slow task end");
        });

        eventExecutor3.execute(() -> {
            System.out.println("fast task");
        });
    }
}

The slow task will make the fast task wait, because both are assigned to the same event executor, which sticks to a single thread.

In theory it might be possible to develop a custom event executor that doesn't stick to a particular thread and just ensures that all submitted tasks are executed sequentially. Something like this:

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;
import java.util.concurrent.locks.ReentrantLock;
import java.util.logging.Level;
import java.util.logging.Logger;

import io.netty.util.concurrent.AbstractEventExecutor;
import io.netty.util.concurrent.EventExecutorGroup;

public class MyEventExecutor extends AbstractEventExecutor {

    private volatile Thread executorThread;

    private final Queue<Runnable> queue = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Executor executor;
    private final Logger logger = Logger.getLogger(this.getClass().getName());

    // The single drain Task currently scheduled on the backing executor, or null if idle.
    private Task task;

    public MyEventExecutor(EventExecutorGroup parent, Executor executor) {
        super(parent);
        this.executor = executor;
    }

    @Override
    public void execute(Runnable runnable) {
        lock.lock();
        try {
            queue.add(runnable);

            // Schedule a drain task only if one isn't already scheduled,
            // so submitted runnables are executed one at a time, in order.
            if (task == null) {
                task = new Task();
                executor.execute(task);
            }
        } finally {
            lock.unlock();
        }
    }

    @Override
    public boolean inEventLoop(Thread thread) {
        // Reports whether the caller is the thread currently draining this executor.
        return thread == executorThread;
    }

    // The remaining abstract methods of EventExecutor (shutdown(), isShutdown(),
    // isShuttingDown(), isTerminated(), awaitTermination(...), shutdownGracefully(...)
    // and terminationFuture()) are omitted here for brevity; a real implementation
    // would have to provide them as well.

    private class Task implements Runnable {
        @Override
        public void run() {
            Runnable r;

            lock.lock();
            try {
                r = queue.poll();
            } finally {
                lock.unlock();
            }

            executorThread = Thread.currentThread();
            try {
                r.run();
            } catch (Throwable t) {
                logger.log(Level.SEVERE, "Unhandled failure", t);
            } finally {
                executorThread = null;

                lock.lock();
                try {
                    if (queue.isEmpty()) {
                        task = null;
                    } else {
                        // More work queued: reschedule this same Task for the next runnable.
                        executor.execute(task);
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
    }
}

And a custom event executor group:

import java.util.concurrent.Executor;

import io.netty.util.concurrent.AbstractEventExecutorGroup;
import io.netty.util.concurrent.EventExecutor;

public class MyEventExecutorGroup extends AbstractEventExecutorGroup {

    private final Executor executor;

    public MyEventExecutorGroup(Executor executor) {
        this.executor = executor;
    }

    @Override
    public EventExecutor next() {
        // Each call hands out a new MyEventExecutor backed by the shared thread pool.
        return new MyEventExecutor(this, executor);
    }

    // The remaining abstract methods (iterator(), shutdownGracefully(...), terminationFuture(),
    // isShuttingDown(), shutdown(), isShutdown(), isTerminated(), awaitTermination(...))
    // are omitted for brevity.
}

The executor passed as a parameter is supposed to be a thread pool executor that can run multiple tasks in parallel.

But it's better never to do this; just use a custom thread pool executor to run blocking tasks off the Netty thread.
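
A minimal sketch of that approach (the pool size, blockingDbCall, and the string messages are placeholders): run the JDBC work on your own pool, then call writeAndFlush from the completion callback. Netty allows writes from non-event-loop threads and executes the actual write on the channel's event loop.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BlockingCommandHandler extends ChannelInboundHandlerAdapter {

    // Dedicated pool for blocking JDBC work; the size is an arbitrary example value.
    private static final ExecutorService jdbcPool = Executors.newFixedThreadPool(10);

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        String command = (String) msg;

        CompletableFuture
                // The blocking call runs on jdbcPool, so the NIO event loop stays free.
                .supplyAsync(() -> blockingDbCall(command), jdbcPool)
                .whenComplete((response, error) -> {
                    if (error != null) {
                        ctx.writeAndFlush("ERROR: " + error.getMessage());
                    } else {
                        // Called from a pool thread; Netty re-schedules the write
                        // onto the channel's event loop.
                        ctx.writeAndFlush(response);
                    }
                });
    }

    // Placeholder for the real JDBC query.
    private String blockingDbCall(String command) {
        return "db-result-for-" + command;
    }
}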
