
Shutting down a netty TcpClient channel in Reactor 3.0.4

I just upgraded projectreactor.io from the OLD versions [core: 3.0.1.RELEASE, netty: 0.5.2.RELEASE] to the NEW versions [core: 3.0.4.RELEASE, netty: 0.6.0.RELEASE].

I open a TcpClient connection and want to close it later.

In the OLD version I used

tcpClient.shutdown();

to disconnect my client from the server.

Is there an equivalent call in the NEW version? I could not find one!

I tried the following on both the NettyInbound and NettyOutbound that I get while creating my TcpClient with tcpClient.newHandler(...) (see the sketch after this list for how the connection is obtained):

  • .context().dispose()
  • .context().channel().disconnect()
  • .context().channel().close()
  • TcpResources.reset()

None of them seem to do the job correctly.
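
For reference, a minimal sketch of how such a connection is obtained and where the calls above were applied, assuming reactor-netty 0.6.x, a placeholder localhost:9000 address and a trivial handler (none of this is taken from the original code):

import reactor.core.publisher.Mono;
import reactor.ipc.netty.NettyContext;
import reactor.ipc.netty.tcp.TcpClient;
import reactor.ipc.netty.tcp.TcpResources;

// Open the connection; the NettyContext returned here is the same object
// that in.context() / out.context() expose inside the handler.
NettyContext nettyContext = TcpClient.create("localhost", 9000)
        .newHandler((in, out) -> Mono.never())   // keep the connection open
        .block();

// The attempts listed above, each tried in turn on that context:
nettyContext.dispose();                  // .context().dispose()
nettyContext.channel().disconnect();     // .context().channel().disconnect()
nettyContext.channel().close();          // .context().channel().close()
TcpResources.reset();                    // reset the globally shared TcpResources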

I noticed that the respective .context().onClose(...) callback is invoked. But after some additional waiting, the server side checks the connections. The server side is plain NIO2, not reactor/netty; while the client was upgraded, the server side remained unchanged.

With the OLD client I got .isOpen() == false for every channel on the server side.

With the NEW client I get .isOpen() == true for every channel on the server side. Most of the time I can even write to the channel, and some channels only switch to .isOpen() == false after a few bytes have been written.

I think this deserves an issue, especially if channel().close() and reset() didn't work. Otherwise it might be due to the default pooling, and TcpClient.create(opts -> opts.disablePool()) might help. Let us know, and if you have a chance to post an issue on http://github.com/reactor/reactor-netty you would be a hero :D
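
Not an official recipe, just a sketch of what the disablePool() suggestion could look like with the 0.6.x client options; host, port and handler are placeholders:

import reactor.core.publisher.Mono;
import reactor.ipc.netty.NettyContext;
import reactor.ipc.netty.tcp.TcpClient;

// Disable the shared connection pool so that dispose() tears down the real
// socket instead of handing the connection back to the pool.
TcpClient client = TcpClient.create(opts -> {
    opts.connect("localhost", 9000);   // placeholder address
    opts.disablePool();
});

NettyContext ctx = client
        .newHandler((in, out) -> Mono.never())
        .block();

// With pooling disabled this should close the underlying channel for good.
ctx.dispose();
ctx.onClose().block();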

Linked to this open issue: https://github.com/reactor/reactor-netty/issues/15. We will review the dispose API.

The following code somehow destroys the channel, but not completely.

ChannelFuture f = nettyContext.channel().close();   // close the underlying Netty channel
f.sync();                                            // wait for the close to complete

nettyContext.dispose();                              // then dispose the NettyContext itself

The problem is that the channel still seems to be open on the server side. For a NIO2-based server this means the server cannot rely on testing the channel with isOpen(): it always returns true.

As a dirty workaround, the server has to write to the channel twice. If it catches an ExecutionException on the second write, the channel had already been closed by the Netty TcpClient.

try {
    // 'channel' is the server-side NIO2 channel (e.g. an AsynchronousSocketChannel);
    // write(...).get() waits for the write to complete.
    channel.write(ByteBuffer.wrap("hello".getBytes())).get();
    // The first write succeeds even though the client has gone away;
    // only the second write fails with an ExecutionException.
    channel.write(ByteBuffer.wrap("bye".getBytes())).get();
} catch (ExecutionException e) {
    LOG.log(Level.SEVERE, "ExecutionException on writing from server into channel", e);
}

With reactor-core 3.1.0.M3 and reactor-netty 0.7.0.M1 the client API was improved and works more reliably.
After blockingNettyContext.shutdown() I still need the following workaround on the server side to make sure the channel was closed:
I write into the channel and close it on exception:

// channel.isOpen() == true at this point
try {
    // Probe the channel with a dummy write; .get() surfaces the failure.
    channel.write(ByteBuffer.wrap("__test__".getBytes())).get();
} catch (ExecutionException e) {
    // The write failed, so the client has already closed the connection.
    channel.close();
}
// channel.isOpen() == false
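
For completeness, a client-side sketch of the 0.7.x shutdown path described above, assuming the start(...) convenience that returns the BlockingNettyContext mentioned earlier; host, port and handler are placeholders:

import reactor.core.publisher.Mono;
import reactor.ipc.netty.tcp.BlockingNettyContext;
import reactor.ipc.netty.tcp.TcpClient;

// Connect and obtain the blocking wrapper around the NettyContext.
BlockingNettyContext blockingNettyContext = TcpClient.create("localhost", 9000)
        .start((in, out) -> Mono.never());   // keep the connection open

// ... use the connection ...

// Disposes the channel and waits until the shutdown has completed.
blockingNettyContext.shutdown();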
