Closing Reactor Netty connection on error status codes

I'm using Reactor Netty through the Spring WebFlux framework to send data to a remote content delivery network. When a client request completes, the default Reactor Netty behaviour is to keep the connection alive and release it back to the underlying connection pool.

Some content delivery networks recommend re-resolving DNS on certain status codes (e.g. 500 Internal Server Error). To achieve this, I've added a custom Netty DnsNameResolver and DnsCache, but I also need to close the connection; otherwise it will be released back to the pool and DNS will not be re-resolved.
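For context, here is a rough sketch of how such a resolver might be wired up. The TTL values are placeholders, and resolver(AddressResolverGroup) assumes a Reactor Netty version that exposes it on the client builder:

import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.dns.DefaultDnsCache;
import io.netty.resolver.dns.DnsAddressResolverGroup;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import reactor.netty.tcp.TcpClient;

// Placeholder TTLs: cap cached DNS entries at 30 seconds so unhealthy
// endpoints are re-resolved quickly
DnsAddressResolverGroup resolverGroup = new DnsAddressResolverGroup(
        new DnsNameResolverBuilder()
                .channelType(NioDatagramChannel.class)
                .resolveCache(new DefaultDnsCache(0, 30, 0))
                .ttl(0, 30));

TcpClient tcpClient = TcpClient.create()
        .resolver(resolverGroup);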

How would one go about closing the connection on error status codes?

So far, I've come up with the following workaround by adding a ConnectionObserver to Reactor Netty's TcpClient:

import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpStatusClass;
import reactor.netty.ConnectionObserver.State;
import reactor.netty.http.client.HttpClientResponse;
import reactor.netty.tcp.TcpClient;

TcpClient tcpClient = TcpClient.create()
        .observe((connection, newState) -> {
            // When the connection is released back to the pool, check whether it
            // also exposes the HTTP response and close it on non-2xx statuses
            if (newState == State.RELEASED && connection instanceof HttpClientResponse) {
                HttpResponseStatus status = ((HttpClientResponse) connection).status();
                if (status.codeClass() != HttpStatusClass.SUCCESS) {
                    connection.dispose();
                }
            }
        });

Namely, if the connection has been released (i.e. put back into the connection pool) and the release was caused by an HTTP client response with an unsuccessful status code, then close the connection.

This approach feels clunky. If the connection is released after an error status code, and the observer is closing that connection, can a new request acquire the same connection in parallel? Does the framework internally handle things gracefully or is this a race condition that invalidates the above approach?

Thanks in advance for your help!

It is better to use doOnResponse or doAfterResponseSuccess; which one is more appropriate depends on the use case.
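As an illustration, here is a minimal sketch using doOnResponse on Reactor Netty's HttpClient. The status check mirrors the one in the question; treat it as a starting point rather than a drop-in solution:

import io.netty.handler.codec.http.HttpStatusClass;
import reactor.netty.http.client.HttpClient;

// Close the connection as soon as an unsuccessful response arrives,
// instead of waiting for the RELEASED state
HttpClient httpClient = HttpClient.create()
        .doOnResponse((response, connection) -> {
            if (response.status().codeClass() != HttpStatusClass.SUCCESS) {
                // dispose() prevents this connection from being reused from the pool
                connection.dispose();
            }
        });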

However, waiting for the RELEASED state should not be a problem either.

If the connection is released after an error status code, and the observer is closing that connection, can a new request acquire the same connection in parallel? Does the framework internally handle things gracefully or is this a race condition that invalidates the above approach?

The connection pool runs with a FIFO leasing strategy by default, so if there are idle connections in the pool you will not obtain the same connection; this is not the case if you switch the connection pool to a LIFO leasing strategy. When acquiring, every connection is checked to see whether it is still active, and only an active connection will be provided for use.
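For reference, a minimal sketch of configuring the leasing strategy through a custom ConnectionProvider; the pool name and size below are placeholders:

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

// "cdn-pool" and maxConnections(50) are placeholder values
ConnectionProvider provider = ConnectionProvider.builder("cdn-pool")
        .maxConnections(50)
        .fifo()          // the default; use .lifo() to switch the leasing strategy
        .build();

HttpClient httpClient = HttpClient.create(provider);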

UPDATE:

You can also try the approach below, which uses only the WebClient API and not the Reactor Netty API:

return this.webClient
           .get()
           .uri("/500")
           .retrieve()
           .onStatus(status -> status.equals(HttpStatus.INTERNAL_SERVER_ERROR), clientResponse -> {
                // Cancel the body subscription instead of consuming it, so the
                // underlying connection is closed rather than released to the pool
                clientResponse.bodyToFlux(DataBuffer.class)
                              .subscribe(new BaseSubscriber<DataBuffer>() {
                                  @Override
                                  protected void hookOnSubscribe(Subscription subscription) {
                                      subscription.cancel();
                                  }
                              });
                return Mono.error(new IllegalStateException("..."));
           })
           .bodyToMono(String.class);
