
Netty sockets left in CLOSE_WAIT state

My Java application runs on a server, opens a port, and accepts socket connections from devices. Everything works fine until, at a certain point, many connections get stuck in CLOSE_WAIT, even though my application has finished processing the packets it received on them.

What I've noticed is that CPU usage roughly doubles, the number of open files keeps growing, and the number of sockets in CLOSE_WAIT keeps growing as well.

In Wireshark, looking at the traffic for a connection that is left in CLOSE_WAIT, we can see that the server never sends a FIN to the client.

PS: I'm on an Ubuntu 14.04 (Trusty) server, using Netty 3.10.1.

Here is the code where I build the pipeline:

@Override
public ChannelPipeline getPipeline() {
    ChannelPipeline pipeline = Channels.pipeline();
    if (resetDelay != null) {
        pipeline.addLast("idleHandler", new IdleStateHandler(GlobalTimer.getTimer(), resetDelay, 0, 0));
    }
    pipeline.addLast("openHandler", new OpenChannelHandler(server));
    if (loggerEnabled) {
        pipeline.addLast("logger", new StandardLoggingHandler());
    }
    addSpecificHandlers(pipeline);
    if (filterHandler != null) {
        pipeline.addLast("filter", filterHandler);
    }
    if (reinitializeHandler != null) {
        pipeline.addLast("reinitialize", reinitializeHandler);
    }
    if (refineHandler != null) {
        pipeline.addLast("refine", refineHandler);
    }
    if (noFilterHandler != null) {
        pipeline.addLast("nofilter", noFilterHandler);
    }
    if (specificFilterHandler != null) {
        pipeline.addLast("specificfilter", specificFilterHandler);
    }
    if (reverseGeocoder != null) {
        pipeline.addLast("geocoder", new ReverseGeocoderHandler(reverseGeocoder, processInvalidPositions));
    }
    pipeline.addLast("handler", new TrackerEventHandler(dataManager));
    return pipeline;
}

CLOSE_WAIT means your program is still running and hasn't closed the socket (the kernel is waiting for it to do so). Add -p to netstat to get the PID, then kill the process more forcefully (with SIGKILL if needed). That will get rid of your CLOSE_WAIT sockets. You can also use ps to find the PID.
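Since the question also mentions the open-file count climbing, it can help to confirm the descriptor leak from inside the JVM rather than only via netstat. A minimal sketch using the HotSpot-specific `com.sun.management.UnixOperatingSystemMXBean` (available on Linux/Unix JVMs; on other platforms this falls back to -1):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdMonitor {
    // Returns this JVM's open file descriptor count, or -1 if the
    // platform MXBean does not expose it (non-Unix or non-HotSpot JVMs).
    static long openFdCount() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            return ((com.sun.management.UnixOperatingSystemMXBean) os)
                    .getOpenFileDescriptorCount();
        }
        return -1;
    }

    public static void main(String[] args) {
        // Log this periodically: a steadily rising number alongside
        // growing CLOSE_WAIT counts points to unclosed sockets.
        System.out.println("open fds: " + openFdCount());
    }
}
```

Logging this value once a minute and comparing it with `netstat`'s CLOSE_WAIT count makes it easy to see whether the leak tracks the stuck connections.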

SO_REUSEADDR is for servers and TIME_WAIT sockets, so it doesn't apply here.

Refer to this thread for other responses.

Check your application code to verify that you are explicitly calling close() on every socket you create once you are done handling the client session.

Your application is leaking sockets by failing to close them. So close them. All of them. In finally blocks. When you read end of stream or get any IOException operating on the socket.
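The pattern described above can be sketched with plain `java.net` sockets (the class and method names here are illustrative, not from the asker's code; in Netty 3 the equivalent is calling `channel.close()` from your handler's `exceptionCaught` and idle/disconnect callbacks). The key point is that the close lives in a finally block, so it runs on clean EOF and on IOException alike; that close is what sends the FIN and moves the server side out of CLOSE_WAIT:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseOnEof {
    // Drains the client until end-of-stream, then ALWAYS closes the socket.
    // Closing in finally guarantees the FIN is sent even if processing throws.
    static void handle(Socket client) {
        try {
            InputStream in = client.getInputStream();
            byte[] buf = new byte[1024];
            while (in.read(buf) != -1) {
                // process the received bytes here
            }
        } catch (IOException e) {
            // any I/O error also ends the session; fall through to close
        } finally {
            try {
                client.close();
            } catch (IOException ignored) {
                // nothing more we can do on a failed close
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Loopback round trip: client writes, closes; server sees EOF and
        // closes its side in finally, so no CLOSE_WAIT socket lingers.
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            try {
                handle(server.accept());
            } catch (IOException ignored) {
            }
        });
        t.start();
        try (Socket c = new Socket("127.0.0.1", server.getLocalPort())) {
            c.getOutputStream().write("ping".getBytes());
        }
        t.join();
        server.close();
        System.out.println("session closed cleanly");
    }
}
```

If the close is skipped (for example, only performed on the happy path), the server-side socket sits in CLOSE_WAIT until the process exits, which matches the symptoms in the question.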
