
How to send 2 TCP packets in C++

Is it possible to send 2 consecutive TCP packets out, as seen in this picture: [screenshot]

I have currently set TCP_NODELAY to true and SO_SNDBUF to 0, and I call send twice in my program. This is the result I obtained:

[screenshot]

The main issue here will be the delayed ACK causing the slow network performance in the 2nd screenshot.

The code for the server:

DWORD WINAPI ServerHandler(void *lp){
    //The port you want the server to listen on
    int host_port = 1852;

    //Initialize socket support WINDOWS ONLY!
    unsigned short wVersionRequested;
    WSADATA wsaData;
    int err;
    wVersionRequested = MAKEWORD( 2, 2 );
    err = WSAStartup( wVersionRequested, &wsaData );
    if ( err != 0 || ( LOBYTE( wsaData.wVersion ) != 2 || HIBYTE( wsaData.wVersion ) != 2 )) 
    {
        printf("Could not find useable sock dll %d\n",WSAGetLastError());
        return 0;
    }

    //Initialize sockets and set any options
    int hsock;
    BOOL bOptVal = true;
    int bOptLen = sizeof (BOOL);
    int iResult = 0;

    hsock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if(hsock == INVALID_SOCKET)
    {
        printf("Error initializing socket %d\n",WSAGetLastError());
        return 0;
    }

    iResult = setsockopt(hsock, SOL_SOCKET, SO_REUSEADDR, (char *) &bOptVal, bOptLen);
    if (iResult == SOCKET_ERROR)
        printf("setsockopt for SO_REUSEADDR failed with error: %d\n", WSAGetLastError());
    else
        printf("Set SO_REUSEADDR: ON\n");

    iResult = setsockopt(hsock, SOL_SOCKET, SO_KEEPALIVE, (char *) &bOptVal, bOptLen);
    if (iResult == SOCKET_ERROR)
        printf("setsockopt for SO_KEEPALIVE failed with error: %d\n", WSAGetLastError());
    else
        printf("Set SO_KEEPALIVE: ON\n");

    //Bind and listen
    struct sockaddr_in my_addr;

    my_addr.sin_family = AF_INET ;
    my_addr.sin_port = htons(host_port);

    memset(&(my_addr.sin_zero), 0, 8);
    my_addr.sin_addr.s_addr = INADDR_ANY ;

    if( bind( hsock, (struct sockaddr*)&my_addr, sizeof(my_addr)) == SOCKET_ERROR )
    {
        printf("Error binding to socket, make sure nothing else is listening on this port %d\n",WSAGetLastError());
        closesocket(hsock);
        return 0;
    }
    if( listen( hsock, SOMAXCONN) == SOCKET_ERROR )
    {
        printf("Error listening %d\n",WSAGetLastError());
        closesocket(hsock);
        return 0;
    }

    //Now let's do the server stuff

    int* csock;
    sockaddr_in sadr;
    int addr_size = sizeof(SOCKADDR);

    printf("waiting for a connection\n");

    while(true)
    {            
        csock = (int*)malloc(sizeof(int));
        if((*csock = accept( hsock, (SOCKADDR*)&sadr, &addr_size))!= INVALID_SOCKET )
        {
            printf("Received connection from %s, %u @ socket %d\n", inet_ntoa(sadr.sin_addr), sadr.sin_port, *csock);

            BOOL bOptVal = true;            
            int iResult = setsockopt(*csock, SOL_SOCKET, TCP_NODELAY, (char *) &bOptVal, sizeof(bOptVal));
            if (iResult == SOCKET_ERROR)
                printf("setsockopt for TCP_NODELAY failed with error: %d\n", WSAGetLastError());
            else
                printf("Set TCP_NODELAY: TRUE\n");

            int sendBuf = 0;
            iResult = setsockopt(*csock, SOL_SOCKET, SO_SNDBUF, (char *) &sendBuf, sizeof(sendBuf));
            if (iResult == SOCKET_ERROR)
                printf("setsockopt for SO_SNDBUF failed with error: %d\n", WSAGetLastError());
            else
                printf("Setsockopt for SO_SNDBUF set to 0\n");


            CreateThread(0,0,&SocketHandler, (void*)csock , 0,0);
        }
        else
        {
            printf("Error accepting %d\n",WSAGetLastError());
        }
    }
    WSACleanup();
}

The code I used for sending data:

int send_TCP_2(int cs, char responseLength[], char data[], int respond_length, int data_length)
{   
    int size = respond_length + data_length;
    int index = 0;

    // combined 10 byte response with data as 1 packet
    std::vector<char> packet(size);

    for(int i=0; i<respond_length; i++)
    {
        packet[index] = responseLength[i];
        index++;
    }

    for(int i=0; i<data_length; i++)
    {
        packet[index] = data[i];
        index++;
    }

    int status;
    char *data_ptr = &packet[0];
    while(size > 0)
    {
        status = send(cs, data_ptr, size, 0);
        if(status > 0)
        {
            data_ptr += status;
            size -= status;
        }
        else if (status == SOCKET_ERROR)
        {
            int error_code = WSAGetLastError();
            printf("send_TCP_2 failed with error code: %d\n", error_code);
            return 0;   // send failed
        }
    }
    return 1;   // send successful  
}

[screenshot]

I have attached a screenshot taken when I do not disable Nagle and do not touch SO_SNDBUF.

The main issue here will be the delayed-ack causing the slow network performance in the 2nd screenshot.

No it won't. You are mistaken about that. You don't have any control over TCP packetization, or rather segmentation, and you don't need it. TCP is a highly optimized stream transfer protocol developed over more than 30 years.

Setting the TCP_NODELAY option to TRUE should fix the delayed-ACK problem. I once had to send a network packet twice, but I did it in the Ethernet (driver) layer and it worked there (the delayed ACK was caused by the other party); at this layer (the socket layer) you cannot do such a thing. Also, do not set SO_SNDBUF to 0...

Thanks for all the advice! Setting TCP_NODELAY to true works, as most of you mentioned. I had made a silly mistake in setsockopt!

I should have put IPPROTO_TCP instead of SOL_SOCKET as the level argument.
