
Java ServerSocket not detecting lost connection

I have a socket client (on an Android phone) and a server (on a PC), both on the same Wi-Fi network, and the server successfully reads data from the client.

However, when I turn off Wi-Fi on the phone, the server's read just hangs; I was expecting an error to be thrown.

I do have setSoTimeout set on the server, but the read is not timing out.

On the PC, netstat still shows an established connection:

netstat -na | grep 6668

TCP 192.168.43.202:6668 192.168.43.26:43076 ESTABLISHED

Is there a way to tell that the client host has disappeared, or to get the read to time out?

Here is the server read:

if (ss.isConnected()) {
    try {
        readData();
    } catch (java.net.SocketTimeoutException ex) {
        logger.warning(ex.toString());
    } catch (InterruptedIOException ex) {
        logger.warning(ex.toString());
    } catch (IOException ex) {
        logger.log(Level.WARNING, "Data communication lost will close streams - IOEx - socket status {0}", ss.socketStatus());
        closeStreams();
    } catch (Exception ex) {
        logger.log(Level.WARNING, "Data communication lost will close streams - Ex - socket status {0}", ss.socketStatus());
        closeStreams();
    }
}

Where readData is,

public void readData() throws IOException {
    for (int i = 0; i < data.length; i++) {
        data[i] = ss.readDouble();
    }
}

ss.readDouble() is,

public double readDouble() throws IOException {
    return in.readDouble();
}

And the server connection,

public void connect() throws IOException {
    if (serverSocket == null || serverSocket.isClosed()) {
        init();
    }
    logger.log(Level.INFO, "Wait on " + serverSocket.getLocalPort());
    server = serverSocket.accept();
    serverSocket.close();

    logger.log(Level.INFO, "Connected to {0}", server.getRemoteSocketAddress());
    out = new DataOutputStream(server.getOutputStream());
    in = new DataInputStream(server.getInputStream());
}

This is the nature of TCP connections, not Java sockets per se. If the remote peer disappears without properly closing the connection, how is your server supposed to know whether the peer is gone or simply has no data to send?

Writing to a closed socket will cause an exception, but a read will simply block if the client doesn't end the TCP connection properly, for the reason above.

If you go through the socket API, you will find an option to set a timeout before starting a blocking operation.
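
For example, here is a minimal, self-contained sketch (port 6668 follows the question; the 5-second value and class name are arbitrary). Note that the timeout goes on the accepted client socket, since that is the socket the reads block on:

import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutReadServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(6668)) {
            Socket client = listener.accept();  // blocks until a client connects
            client.setSoTimeout(5000);          // reads now give up after 5 s of silence
            DataInputStream in = new DataInputStream(client.getInputStream());
            try {
                double value = in.readDouble(); // throws SocketTimeoutException if nothing arrives in time
                System.out.println("read " + value);
            } catch (SocketTimeoutException ex) {
                System.out.println("no data within timeout, assuming the peer is gone");
            }
        }
    }
}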

You could also consider TCP keep-alive, which is likewise exposed by the Socket API.

// Edit: additional information as per the OP's comment

When a client connects, accept() creates a client socket for communicating with that peer. Your server socket is the one on which you listen for new client connections. It is the client socket on which you specify keep-alive or the read timeout, because this is the socket you read from and write to.

// 'server' is actually a reference to the accepted client socket
server = serverSocket.accept();

// enable TCP keep-alive on the client socket
server.setKeepAlive(true);

serverSocket.close();
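
Putting both options together, the question's connect() might look like this (a sketch reusing the question's field names; the 10-second timeout is an arbitrary example):

public void connect() throws IOException {
    if (serverSocket == null || serverSocket.isClosed()) {
        init();
    }
    logger.log(Level.INFO, "Wait on {0}", serverSocket.getLocalPort());
    server = serverSocket.accept(); // 'server' is the per-client socket
    server.setSoTimeout(10000);     // readDouble() now throws SocketTimeoutException after 10 s of silence
    server.setKeepAlive(true);      // enable TCP keep-alive probes (intervals are OS-controlled)
    serverSocket.close();           // the question's code closes the listener after one client

    logger.log(Level.INFO, "Connected to {0}", server.getRemoteSocketAddress());
    out = new DataOutputStream(server.getOutputStream());
    in = new DataInputStream(server.getInputStream());
}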

Set a timeout yourself: say, if no data has arrived for 10 minutes, close the connection within the next 60 seconds.

Setting a timeout for socket operations

The answer to this question may help you.
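
A rough sketch of that idea, layering an application-level idle limit on top of setSoTimeout (the 60-second read timeout, 10-minute idle limit, and handleValue() helper are all illustrative):

void readUntilIdle(Socket socket, DataInputStream in) throws IOException {
    socket.setSoTimeout(60000);                    // each blocked read gives up after 60 s
    long lastData = System.currentTimeMillis();
    while (true) {
        try {
            double value = in.readDouble();
            lastData = System.currentTimeMillis(); // data arrived, reset the idle clock
            handleValue(value);                    // hypothetical handler for the value
        } catch (java.net.SocketTimeoutException ex) {
            if (System.currentTimeMillis() - lastData > 600000) {
                socket.close();                    // idle for 10 minutes: assume the peer is gone
                return;
            }
            // otherwise keep waiting for the next read
        }
    }
}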
