
TCP Socket write error

What is wrong with this simple Java TCP server/client example?

First, start the server:

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {

    public static void main(String[] args) throws Throwable {
        ServerSocket ss = new ServerSocket(2345);
        Socket s = ss.accept();
        OutputStream os = s.getOutputStream();
        Thread.sleep(5000);
        for (int i = 0; i < 3; i++) {
            os.write("A".getBytes());
            os.flush();
            System.out.println("Written in cycle " + i);
        }
        os.close();
        s.close();

    }

}

Then start the client and watch the server:

import java.net.Socket;

public class Client {

    public static void main(String[] args) throws Throwable {
        Socket s = new Socket("localhost", 2345);
        s.close();
        System.out.println("Closed");
    }

}

The client socket is closed immediately, yet the write on the server side only fails in the second loop iteration, i.e. the first write does not throw an exception.

This is the server's execution output:

Written in cycle 0
Exception in thread "main" java.net.SocketException: Software caused connection abort: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:124)
    at Server.main(Server.java:16)

Your client connects and then immediately closes the connection. The first write still appears to succeed because TCP buffers the data locally; the server only detects the closed connection once the peer's reset comes back, so the second write fails because the server refuses to write into a connection that no longer exists.

The following article matches the question exactly and explains both what the problem is and how to solve it:

http://download.oracle.com/javase/1.5.0/docs/guide/net/articles/connection_release.html

So, to be more specific about the code above, setting SO_LINGER to 0 on the client avoids the exception:

import java.net.Socket;

public class Client {

    public static void main(String[] args) throws Throwable {
        Socket s = new Socket("localhost", 2345);
        s.setSoLinger(true, 0);
        s.close();
    }

}

When you write to a socket, the data has to be passed on to the remote system. If the sender had to wait for the remote system to confirm every piece of data, it would be many times slower. In any case, that is not how TCP works: a write returns as soon as the data is buffered locally.

If you need the remote side to confirm that it has received your data, you need to add a protocol of your own that acknowledges every piece of data you send.
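As a minimal sketch of such an application-level acknowledgement (the one-byte `ACK` value and the `sendWithAck` helper are my own invention, not part of any standard protocol), the sender can block after each payload until the receiver echoes a confirmation byte back:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class AckDemo {

    // Hypothetical protocol: after each payload byte, the receiver
    // sends this single acknowledgement byte back to the sender.
    static final int ACK = 0x06;

    // Write the payload, then block until the peer confirms receipt.
    static void sendWithAck(OutputStream out, InputStream in, byte[] data) throws IOException {
        out.write(data);
        out.flush();
        int reply = in.read();                  // blocks until the ACK arrives
        if (reply != ACK) {
            throw new IOException("expected ACK, got " + reply);
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0);  // any free port

        Thread receiver = new Thread(() -> {
            try (Socket s = ss.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                for (int i = 0; i < 3; i++) {
                    in.read();                  // consume one payload byte
                    out.write(ACK);             // confirm receipt
                    out.flush();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        receiver.start();

        try (Socket s = new Socket("localhost", ss.getLocalPort())) {
            for (int i = 0; i < 3; i++) {
                sendWithAck(s.getOutputStream(), s.getInputStream(), "A".getBytes());
                System.out.println("Acknowledged " + i);
            }
        }
        receiver.join();
        ss.close();
    }
}
```

With this scheme a successful `sendWithAck` really does mean the peer read the data, at the cost of one round trip per message.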

Copied from comments as it answers your question.

Doesn't the client socket have to do something to receive the output stream from the server? Maybe I'm just missing something, so please correct me if I'm wrong, but why do you close the client socket immediately after creating it? That is the problem.
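Following that observation, another way to avoid the exception without touching SO_LINGER is to have the client actually read what the server sends before closing. A sketch, assuming the server from the question is listening on port 2345:

```java
import java.io.InputStream;
import java.net.Socket;

public class Client {

    public static void main(String[] args) throws Throwable {
        Socket s = new Socket("localhost", 2345);
        InputStream in = s.getInputStream();
        int b;
        // Keep reading until the server closes its end (read returns -1).
        while ((b = in.read()) != -1) {
            System.out.println("Read " + (char) b);
        }
        s.close();
        System.out.println("Closed");
    }
}
```

Because the client now stays connected until the server has written all three bytes and closed the stream, none of the server's writes hit a dead connection.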

