
How to detect a connection failure in Indy TCP Client

I have made a client and a server using Indy TIdTCPClient and TIdTCPServer in C++Builder 11 Alexandria.

I can start the server and connect the client to it correctly, but if I set the server's MaxConnections to a value N and then try to connect an (N+1)-th client, the connection apparently does not fail.

For example: I set MaxConnections=2 on the server. The first client connects to it and the server's OnConnect event is raised, while in the client's OnStatus event I get two messages:

message 1: Connecting to 10.0.0.16.
message 2: Connected.

I try to connect the second client: the server's OnConnect event is NOT raised (which is what I expect), but in the client's OnStatus event I get the same two messages (which is not what I expect):

message 1: Connecting to 10.0.0.16.
message 2: Connected.

Then, the first client can exchange data with the server, and the second client can't (this seems right).

I don't understand why the second client's connection does not fail explicitly. Am I doing something wrong?
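For reference, the connection attempt described above boils down to something like this minimal sketch (the component names, button handler, memo log and port are placeholders, not the exact code):

void __fastcall TForm1::ButtonConnectClick(TObject *Sender)
{
    IdTCPClient1->Host = "10.0.0.16";
    IdTCPClient1->Port = 6000;   // placeholder port
    IdTCPClient1->Connect();     // does not raise, even for the 3rd client when MaxConnections=2
}

void __fastcall TForm1::IdTCPClient1Status(TObject *ASender, const TIdStatus AStatus,
    const UnicodeString AStatusText)
{
    Memo1->Lines->Add(AStatusText);   // logs "Connecting to 10.0.0.16." and "Connected."
}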

You are not doing anything wrong. This is normal behavior for TIdTCPServer.

There is no cross-platform socket API at the OS level¹ to limit the number of active/accepted connections on a TCP server socket, only to limit the number of pending connections in the server's backlog. That limit is handled by the TIdTCPServer::ListenQueue property, which is 15 by default (but this is more of a suggestion than a hard limit; the underlying socket stack can override it if it wants to).

As such, the TIdTCPServer::MaxConnections property is implemented by simply accepting any client from the backlog that attempts to connect, and then immediately disconnecting that client if it exceeds the MaxConnections limit.
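For illustration, here is a minimal sketch of where these two properties sit on the server side, assuming the usual form-based setup with a TIdTCPServer named IdTCPServer1 (the port value is arbitrary):

void __fastcall TForm1::FormCreate(TObject *Sender)
{
    IdTCPServer1->DefaultPort = 6000;   // arbitrary port for illustration
    IdTCPServer1->MaxConnections = 2;   // extra clients are accepted, then immediately disconnected
    IdTCPServer1->ListenQueue = 15;     // backlog hint passed to the OS (Indy's default)
    IdTCPServer1->Active = true;
}

void __fastcall TForm1::IdTCPServer1Connect(TIdContext *AContext)
{
    // Fires only for clients that do not exceed the MaxConnections limit.
}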

So, if you try to connect more clients to TIdTCPServer than MaxConnections allows, those extra clients will not see any failure in connecting (unless the backlog fills up), but the server will not fire the OnConnect event for them. From the clients' perspective, they actually did connect successfully: they were fully accepted by the server's underlying socket stack (the TCP 3-way handshake completed). However, they simply will not have processed the disconnect yet. As soon as they try to actually communicate with the server, they will detect the disconnect, usually in the form of an EIdConnClosedGracefully exception (but that is not guaranteed).
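For example, a rejected client would typically notice the disconnect along these lines (a minimal sketch; the "HELLO" line-based exchange is a hypothetical protocol, not something Indy requires):

#include <IdTCPClient.hpp>
#include <IdExceptionCore.hpp>   // EIdConnClosedGracefully
#include <System.SysUtils.hpp>

void ProbeServer(TIdTCPClient *Client)
{
    try
    {
        // Connect() succeeded earlier; the failure only shows up on real I/O.
        Client->IOHandler->WriteLn("HELLO");                 // hypothetical command
        UnicodeString Reply = Client->IOHandler->ReadLn();   // fails if the server already dropped us
    }
    catch (const EIdConnClosedGracefully &)
    {
        // The server accepted the TCP connection and then closed it,
        // e.g. because MaxConnections was exceeded.
    }
    catch (const Exception &)
    {
        // Other socket errors are possible too; the exact exception is not guaranteed.
    }
}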

¹ On Windows only, there is a WSAAccept() function which has a callback that can reject pending connections before they leave the backlog queue. But Indy does not make use of this callback at this time.

Different TCP stacks exhibit different behavior. Your description is consistent with a TCP stack that simply ignores SYNs to a socket that has reached the maximum configured limit of pending and/or accepted connections: the SYN packet is simply dropped on the floor and not acknowledged.

The nature of TCP is that it's supposed to handle network drops. The sender does not immediately bail out, but will keep trying to connect for some period of time. This part is consistent with all TCP implementations.

If you want your client to quickly fail a connection that does not get established within some set period of time, you'll need to implement a manual timeout yourself.
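One way to do that, assuming Indy 10's TIdTCPClient and its ConnectTimeout property are available, is to bound the Connect() call itself; otherwise a watchdog thread can serve the same purpose. A minimal sketch (the host, port and 5-second limit are arbitrary values for illustration):

#include <IdTCPClient.hpp>
#include <System.SysUtils.hpp>
#include <memory>

void TryConnectWithTimeout()
{
    std::unique_ptr<TIdTCPClient> Client(new TIdTCPClient(NULL));
    Client->Host = "10.0.0.16";       // arbitrary host/port for illustration
    Client->Port = 6000;
    Client->ConnectTimeout = 5000;    // give up after 5 seconds instead of waiting
                                      // for the OS-level connect timeout
    try
    {
        Client->Connect();
        // ... connected within the time limit, proceed as usual ...
        Client->Disconnect();
    }
    catch (const Exception &)
    {
        // A timed-out (or refused) connect surfaces here as an Indy exception.
    }
}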
