
Winsock accept() returning WSAENOTSOCK (code 10038)

Hope you're having a good day. Another socket issue, another day :)

I finally got the Microsoft Visual C++ (MSVC++) IDE installed, plus the Platform SDK, so I can compile Winsock applications.

Missed a chunk of stuff here. In the ServerSocket::accept() function, a new ClientSocket instance is created and its socket file descriptor is set to the one returned by accept(). I have also checked there, and the descriptor is recognized as valid at that point.
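Roughly, the flow in that function looks like this (a simplified sketch, not the exact pasted code; member names may differ slightly from the library linked below):

ClientSocket ServerSocket::accept() {
    // Peer address is not needed for this sketch, so pass NULL.
    SOCKET fd = ::accept(sockfd, NULL, NULL);
    if (fd == INVALID_SOCKET) throw SocketException("accept() failed.");

    ClientSocket client;   // default constructor (currently calls socket() itself)
    client.setFd(fd);      // replace its descriptor with the accepted one
    return client;         // returned by value
}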

My ClientSocket::recv() wrapper calls the Winsock recv() function. The problem is that recv() reports the socket descriptor as invalid, but only on the server-side ClientSocket instance returned from my ServerSocket::accept(); the client-side ClientSocket instance has no problems. I inserted multiple debug statements, and as far as I can tell the descriptor is valid.

The weirdest part is that if I compile this exact same code with MinGW gcc/g++ on Windows, it runs fine! The problem only occurs when building with MSVC++.

string ClientSocket::recv(int bufsize) {
    if (!isConnected()) throw SocketException("Not connected.");

    cout << "SocketRecv: " << (sockfd == INVALID_SOCKET) << " " << sockfd << endl;
    vector<char> buffer(bufsize+1, 0);
    cout << "SocketRecv1: " << (sockfd == INVALID_SOCKET) << " " << sockfd << endl;
    int ret = ::recv(sockfd, &buffer[0], bufsize, 0);
    cout << "SocketRecv2: " << (sockfd == INVALID_SOCKET) << " " << sockfd << endl;

    // ret is apparently -1 because of an "invalid" socket descriptor, yet the
    // statements above print 0 (false) for (sockfd == INVALID_SOCKET)! :\
    if (ret < 0) {
        #ifdef _WIN32
        switch((ret = WSAGetLastError())) {
        #else
        switch(errno) {
        #endif
            case DECONNREFUSED: // The 'd' prefix means _I_ defined it, i.e. from windows it's
                                // set to 'WSAECONNREFUSED', but from linux it's set to 'ECONNREFUSED'
                throw SocketException("Connection refused on recover.");
                break;
            case DENOTCONN:
                throw SocketException("Not connected.");
                break;
            case DECONNABORTED:
                throw SocketException("Software caused connection abort.");
                break;
            case DECONNRESET:
                throw SocketException("Connection reset by peer.");
                break;
            default:
                //usually this itoa() and char/string stuff isn't here... needed it in 
                //order to find out what the heck the problem was.
                char tmp[21];
                string tmp4 = "Unknown error reading socket. ";
                string tmp3 = tmp4 + itoa(ret, tmp, 10);
                //this throw keeps throwing "Unknown error reading socket. 10038"
                throw SocketException(tmp3); 
                break;
        }
    } else if (ret == 0) {
        connected = false;
        return "";
    }

    return &buffer[0];
}

Additional information: the socket is in blocking mode, i.e. it has not been set to non-blocking. I have called WSAStartup() successfully. This happens on the server side, on the ClientSocket instance returned from my ServerSocket::accept() (yes, I checked the descriptor there too; it looks fine). The client side reports WSAECONNRESET (10054) or WSAECONNABORTED (10053).

I can't think of anything else that could be wrong. The worst part is that it works fine with MinGW gcc/g++ on both Windows and Linux.

If you want to see the whole library, it's pasted at: (caution: 600+ lines!)
Socket.cxx - http://paste.pocoo.org/show/353725/
Socket.hxx - http://paste.pocoo.org/show/353726/

Thanks!!!

Update - As per Ben's solution, I am now using void ServerSocket::accept(ClientSocket& sock); and calling it as: ClientSocket mysock; server.accept(mysock);
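For reference, the reworked accept looks roughly like this (a sketch; the exact member names are in the pastes above):

// Sketch of the reference-parameter version: the caller's ClientSocket
// adopts the accepted descriptor, so no copy is ever made and no
// destructor closes the socket behind our back.
void ServerSocket::accept(ClientSocket& sock) {
    SOCKET fd = ::accept(sockfd, NULL, NULL);
    if (fd == INVALID_SOCKET) throw SocketException("accept() failed.");
    sock.setFd(fd);
}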

Thank you so much!!!

Looks like you're not following the Rule of Three. Any time you have a destructor, you need to write or disable both the copy constructor and the assignment operator.
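For a class that owns a socket descriptor, that means something like this (illustrative sketch only; in C++0x you could delete the copy operations instead of declaring them private and undefined):

// Illustrative sketch: a resource-owning class must control copying.
class ClientSocket {
public:
    ClientSocket() : sockfd(INVALID_SOCKET), connected(false) {}
    ~ClientSocket() {
        if (sockfd != INVALID_SOCKET) closesocket(sockfd);
    }
private:
    // Disable copying: declared private and never defined, so any
    // accidental copy fails at compile (or link) time.
    ClientSocket(const ClientSocket&);
    ClientSocket& operator=(const ClientSocket&);

    SOCKET sockfd;
    bool connected;
};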

In your example usage:

ClientSocket client = server.accept();

The variable client is copy-constructed from the return value. Then the destructor runs on the temporary object, closing the socket.

In C++0x, you can add a move constructor and cure this problem. For now, you should implement swap and use it:

ClientSocket client;
server.accept().swap(client);
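A member swap for a class like this only needs to exchange the descriptor (and the connected flag), for example:

// Sketch of a member swap: exchanges ownership without closing anything.
// Needs <algorithm> for std::swap (pre-C++11).
void ClientSocket::swap(ClientSocket& other) {
    std::swap(sockfd, other.sockfd);
    std::swap(connected, other.connected);
}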

Or pass client as a parameter of server.accept :

ClientSocket client;
server.accept(client);

You could write a moving copy constructor for ClientSocket, in the style of auto_ptr, but I wouldn't recommend that. People don't expect a copy constructor to steal resources.

Just because your socket variable is not set to INVALID_SOCKET does not mean the socket descriptor is valid from WinSock's perspective. Obviously it is not, or else WinSock would not be complaining about it. The socket is being closed before you are able to call recv(), which is evident from the client side getting errors as well.

The root cause is that ServerSocket::accept() returns a new ClientSocket instance by value. The compiler has to create a second copy of the object for the return value, but your ClientSocket class does not define a copy constructor. The original socket descriptor is copied from the first ClientSocket instance to the second, and then the original instance is destroyed on exit, closing the socket before the second instance can ever use it. You need to define a copy constructor that takes ownership of the original socket descriptor and sets the original instance's descriptor to INVALID_SOCKET, so its destructor can no longer close the socket.
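A sketch of such a transferring copy constructor (the auto_ptr-style approach the other answer mentions; note it takes a non-const reference so it can modify the source):

// Sketch: the new instance adopts the descriptor and the source is
// neutered, so only one ClientSocket ever closes the socket.
ClientSocket::ClientSocket(ClientSocket& other)
    : sockfd(other.sockfd), connected(other.connected) {
    other.sockfd = INVALID_SOCKET;   // source destructor now has nothing to close
    other.connected = false;
}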

Related to this, your ClientSocket class has a handle leak. You are calling both WSAStartup() and socket() inside the ClientSocket constructor (which is not the best place for either call). When ServerSocket::accept() accepts a new client, you call ClientSocket::setFd() with the new socket descriptor, which replaces the original descriptor allocated in the ClientSocket constructor without closing it. You should define a second ClientSocket constructor that accepts an existing socket descriptor as input, and have that constructor call setFd() instead of socket(). That eliminates the leak, and the copy constructor can then take ownership of this single allocated socket descriptor when needed.
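For example (a sketch, assuming WSAStartup() has been moved out of the per-socket constructor):

// Sketch: wrap an already-accepted descriptor instead of calling socket()
// and then immediately leaking the result via setFd().
ClientSocket::ClientSocket(SOCKET acceptedFd)
    : sockfd(INVALID_SOCKET), connected(false) {
    setFd(acceptedFd);   // adopt the existing descriptor
    connected = true;    // it came from accept(), so it is already connected
}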
