
Why aren't ServerSocket connections rejected when backlog is full?

Disinterested curiosity...

In Java I listen on a socket, with a backlog of 1:

ServerSocket ss = new ServerSocket(4000, 1);

In separate shells I run

netcat localhost 4000

many times - 5 so far.

The connections are never rejected. Every instance of netcat sits and waits until my ServerSocket is destroyed.

Backlog length is 1 - that means it should let only one incoming connection queue up and then reject the rest, shouldn't it? (I don't know whether the queue includes the first connection - not important right now.)

I know I can make this work by closing the ServerSocket (and then opening another one when I'm ready), but... shouldn't it work anyway?
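That workaround might look roughly like the following sketch (my own illustration of the close-and-reopen idea, not production code): once nothing is listening on the port, further connection attempts are actively refused.

ServerSocket ss = new ServerSocket(4000, 1);
Socket client = ss.accept();    // take the one connection we care about
ss.close();                     // nothing is listening now, so new attempts get 'connection refused'
// ... handle client ...
ss = new ServerSocket(4000, 1); // start listening again when ready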

Have I misunderstood?

As I wrote here, quoted above:

This behaviour is platform-dependent. Windows issues an RST when the backlog fills up, which results in 'connection refused'. Unix and Linux just drop the SYN packet, so the client keeps retransmitting it until it gives up - which is why your netcat instances sit there waiting instead of being rejected.
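A rough way to observe both behaviours from one program is to fill the queue yourself and watch what the connect attempts do. The sketch below is illustrative only (the port, timeout, and attempt count are arbitrary choices), and it assumes the listener never calls accept():

import java.net.*;
import java.util.*;

public class BacklogProbe {
    public static void main(String[] args) throws Exception {
        // Listener that never calls accept(), so completed handshakes pile up in the queue.
        ServerSocket ss = new ServerSocket(4000, 1);
        List<Socket> held = new ArrayList<>(); // keep successful connections open so they stay queued

        for (int i = 1; i <= 10; i++) {
            Socket s = new Socket();
            try {
                // Short connect timeout so a dropped SYN doesn't hang forever.
                s.connect(new InetSocketAddress("localhost", 4000), 2000);
                held.add(s);
                System.out.println(i + ": connected");
            } catch (ConnectException e) {
                System.out.println(i + ": refused (RST) - typical of Windows");
            } catch (SocketTimeoutException e) {
                System.out.println(i + ": timed out (SYN dropped) - typical of Unix/Linux");
            }
        }
        ss.close();
    }
}

On Linux you would typically see a few successful connects before any failure, because the effective queue is longer than the 1 you asked for - see the note below.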

NB The backlog length isn't really 1: the platform is free to adjust the value you pass up or down. The smallest minimum backlog length in history was five, in early BSD releases; it is now fifty or even five hundred on some platforms.
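If you want to see what your platform actually rounded the backlog to, a crude sketch is to keep connecting until the handshake stops completing (again assuming the listener never accepts; the port and timeout are arbitrary):

import java.io.IOException;
import java.net.*;
import java.util.*;

public class EffectiveBacklog {
    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(4000, 1); // ask for a backlog of 1
        List<Socket> held = new ArrayList<>();       // hold connections open so each stays queued
        try {
            while (true) {
                Socket s = new Socket();
                s.connect(new InetSocketAddress("localhost", 4000), 1000);
                held.add(s);
            }
        } catch (IOException e) {
            // The first refusal or timeout marks the end of the queue.
            System.out.println("Handshakes completed before failure: " + held.size());
        } finally {
            ss.close();
        }
    }
}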
