
How does netty's non-blocking threading model work

Currently I am reading "Reactive Programming with RxJava" by Tomasz Nurkiewicz. In chapter 5 he compares two different approaches to building an HTTP server, one of them based on the netty framework.

And I cannot figure out how using such a framework helps to build a more responsive server, compared to the classic approach of blocking IO with one thread per request.

The main concept is to use as few threads as possible, but if there is some blocking IO operation, such as database access, this means only a very limited number of concurrent connections can be processed at a time.

I have reproduced the example from that book.

Initializing the server:

public static void main(String[] args) throws Exception {
    EventLoopGroup bossGroup = new NioEventLoopGroup(1);
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        new ServerBootstrap()
                .option(ChannelOption.SO_BACKLOG, 50_000)
                .group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new HttpInitializer())
                .bind(8080)
                .sync()
                .channel()
                .closeFuture()
                .sync();
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}

The size of the worker group thread pool is availableProcessors * 2 = 8 on my machine.
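(For reference, the no-arg `NioEventLoopGroup` constructor derives this default from Netty's `MultithreadEventLoopGroup`; the sketch below only reproduces the arithmetic, it does not call Netty:)

```java
public class DefaultLoopThreads {

    // Netty's default event-loop count, reproduced here:
    // max(1, availableProcessors * 2)
    public static int defaultThreads(int processors) {
        return Math.max(1, processors * 2);
    }

    public static void main(String[] args) {
        // 4 cores -> 8 worker threads, matching the pool size observed above
        System.out.println(defaultThreads(4));
    }
}
```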

To simulate some IO operation and be able to see what is going on in the logs, I added a 1 sec delay to the handler (but it could just as well be some business-logic call):

class HttpInitializer extends ChannelInitializer<SocketChannel> {

    private final HttpHandler httpHandler = new HttpHandler();

    @Override
    public void initChannel(SocketChannel ch) {
        ch
                .pipeline()
                .addLast(new HttpServerCodec())
                .addLast(httpHandler);
    }
}

And the handler itself:

class HttpHandler extends ChannelInboundHandlerAdapter {

    private static final Logger log = LoggerFactory.getLogger(HttpHandler.class);

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            try {
                System.out.println(format("Request received on thread '%s' from '%s'", Thread.currentThread().getName(), ((NioSocketChannel)ctx.channel()).remoteAddress()));
            } catch (Exception ex) {}
            sendResponse(ctx);
        }
    }

    private void sendResponse(ChannelHandlerContext ctx) {
        final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
                HTTP_1_1,
                HttpResponseStatus.OK,
                Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (Exception ex) {
            System.out.println("Ex caught " + ex);
        }
        response.headers().add("Content-length", 2);
        ctx.writeAndFlush(response);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        log.error("Error", cause);
        ctx.close();
    }
}
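The `Thread.sleep` above blocks the event-loop thread itself, which is exactly what serializes the requests. Since Netty's `ctx.executor()` is a `ScheduledExecutorService`, a non-blocking version of this handler would schedule the write instead of sleeping, roughly `ctx.executor().schedule(() -> ctx.writeAndFlush(response), 1, TimeUnit.SECONDS)` (an untested sketch, not the book's code). The same idea with only the JDK, one thread "serving" 100 delayed responses:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NonBlockingDelay {

    // Completes n "responses", each after a 1-second delay, on a single
    // thread that is never blocked; returns the total elapsed millis.
    public static long runMillis(int n) throws InterruptedException {
        ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(n);
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            // the delay is a scheduled callback, not a sleep
            loop.schedule(done::countDown, 1, TimeUnit.SECONDS);
        }
        done.await();
        loop.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runMillis(100) + " ms");
    }
}
```

With a blocking sleep, one thread could serve only one such response per second; scheduled as callbacks, all 100 complete together after roughly one second.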

A client to simulate multiple concurrent connections:

public class NettyClient {

    public static void main(String[] args) throws Exception {
        NettyClient nettyClient = new NettyClient();
        for (int i = 0; i < 100; i++) {
            new Thread(() -> {
                try {
                    nettyClient.startClient();
                } catch (Exception ex) {
                }
            }).start();
        }
        TimeUnit.SECONDS.sleep(5);
    }

    public void startClient()
            throws IOException, InterruptedException {

        InetSocketAddress hostAddress = new InetSocketAddress("localhost", 8080);
        SocketChannel client = SocketChannel.open(hostAddress);

        System.out.println("Client... started");

        String threadName = Thread.currentThread().getName();

        // Send messages to server
        String[] messages = new String[]
                {"GET / HTTP/1.1\n" +
                        "Host: localhost:8080\n" +
                        "Connection: keep-alive\n" +
                        "Cache-Control: max-age=0\n" +
                        "Upgrade-Insecure-Requests: 1\n" +
                        "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\n" +
                        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\n" +
                        "Accept-Encoding: gzip, deflate, br\n" +
                        "Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7\n\n"}; // trailing blank line terminates the request headers

        for (int i = 0; i < messages.length; i++) {
            byte[] message = messages[i].getBytes();
            ByteBuffer buffer = ByteBuffer.wrap(message);
            client.write(buffer);
            System.out.println(messages[i]);
            buffer.clear();
        }
        client.close();
    }
}

Expected - [figure from the book: throughput (RPS) versus number of concurrent connections]

Our case is the blue line, the only difference being that the delay is set to 0.1 sec instead of the 1 sec I explained above. With 100 concurrent connections I was expecting 100 RPS, because there were about 90k RPS with 100k concurrent connections and a 0.1 sec delay, as the figure shows.

Actual - netty processes only 8 concurrent connections at a time, waits for the sleep to expire, then processes another batch of 8 requests, and so on. As a result, it took about 13 seconds to complete all the requests. It is obvious that to handle more clients I would need to allocate more threads.
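The ~13 seconds are exactly what blocking arithmetic predicts: with 8 event-loop threads each pinned for 1 second per request, 100 requests need ceil(100 / 8) = 13 one-second rounds. A tiny sketch of that estimate (the method name is mine, not from the book):

```java
public class ThroughputEstimate {

    // With `threads` workers each blocked `delaySec` seconds per request,
    // `requests` requests finish in roughly ceil(requests / threads) rounds.
    public static double estimateSeconds(int requests, int threads, double delaySec) {
        return Math.ceil((double) requests / threads) * delaySec;
    }

    public static void main(String[] args) {
        System.out.println(estimateSeconds(100, 8, 1.0)); // the ~13 s observed above
    }
}
```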

But this is exactly how the classic blocking IO approach works! Here are the server-side logs; you can see that the first 8 requests were processed, and one second later another 8 requests.

2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49466'
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49465'
2019-07-19T12:34:10.792Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49464'
2019-07-19T12:34:10.793Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49463'
2019-07-19T12:34:10.799Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49462'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49467'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49461'
2019-07-19T12:34:10.803Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49460'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49552'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49553'
2019-07-19T12:34:11.799Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49554'
2019-07-19T12:34:11.801Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49470'
2019-07-19T12:34:11.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49475'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49559'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49468'
2019-07-19T12:34:11.806Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49469'

So my question is - how can netty (or anything similar) with its non-blocking and event-driven architecture utilize CPU more effectively? If we had only 1 thread per loop group, the pipeline would be as follows:

  1. The ServerChannel selection key is set to ON_ACCEPT.
  2. The ServerChannel accepts a connection and the ClientChannel selection key is set to ON_READ.
  3. A worker thread reads the content of this ClientChannel and passes it down to the chain of handlers.
  4. Even if the ServerChannel thread accepts another client connection and puts it into some kind of queue, the worker thread cannot do anything until all the handlers in the chain have finished their job. From my point of view, a thread cannot simply switch to another job, because even waiting for a response from a remote database requires CPU ticks.

"How can netty (or anything similar) with its non-blocking and event-driven architecture utilize CPU more effectively?"

It cannot.

The goal of asynchronous (non-blocking and event-driven) programming is to save core memory, by using tasks instead of threads as the units of parallel work. This allows having millions of parallel activities instead of thousands.

CPU cycles cannot be saved automatically - that is always an intellectual effort.
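To make the memory point concrete: every Java thread reserves a stack (commonly on the order of 1 MiB by default, though the exact size is JVM- and OS-dependent), so parking one thread per blocked connection runs out of memory long before it runs out of CPU. A rough back-of-the-envelope sketch (the numbers are assumptions, not measurements):

```java
public class StackMemoryMath {

    // Rough estimate of the stack memory reserved by a thread-per-connection
    // server: connections * per-thread stack size.
    public static long reservedStackBytes(long connections, long stackBytes) {
        return connections * stackBytes;
    }

    public static void main(String[] args) {
        long oneMiB = 1L << 20;
        // 100k blocked connections as threads: ~97 GiB of stacks;
        // 100k pending tasks are just small heap objects in a queue.
        System.out.println(reservedStackBytes(100_000, oneMiB) / (1L << 30) + " GiB");
    }
}
```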
