
Using htons() in my code puts all zeros in the buffer and I don't understand why

I need to use htons() in my code to convert the little-endian ordered short to a network (big-endian) byte order. I have this code:

int PacketInHandshake::serialize(SOCKET connectSocket, BYTE* outBuffer, ULONG outBufferLength) {
    memset(outBuffer, 0, outBufferLength);
    const int sizeOfShort = sizeof(u_short);
    u_short userNameLength = (u_short)strlen(userName);
    u_short osVersionLength = (u_short)strlen(osVersion);
    int dataLength = 1 + (sizeOfShort * 2) + userNameLength + osVersionLength;
    outBuffer[0] = id;
    outBuffer[1] = htons(userNameLength);// htons() here
    printf("u_short byte 1: %c%c%c%c%c%c%c%c\n", BYTE_TO_BINARY(outBuffer[1]));
    printf("u_short byte 2: %c%c%c%c%c%c%c%c\n", BYTE_TO_BINARY(outBuffer[2]));
    for (int i = 0; i < userNameLength; i++) {
        outBuffer[1 + sizeOfShort + i] = userName[i];
    }
    outBuffer[1 + sizeOfShort + userNameLength] = htons(osVersionLength);// and here
    for (int i = 0; i < osVersionLength; i++) {
        outBuffer[1 + (sizeOfShort * 2) + userNameLength + i] = osVersion[i];
    }
    int result;
    result = send(connectSocket, (char*)outBuffer, dataLength, 0);
    if (result == SOCKET_ERROR) {
        printf("send failed with error: %d\n", WSAGetLastError());
    }
    printf("PacketInHandshake sent: %ld bytes\n", result);
    return result;
}

Which results in a packet like this being sent:

(screenshot of the sent packet's bytes)

As you can see, the length bytes written with htons() are all zeros, when they should be 00 07 and 00 16 respectively.

And this is the console output:

u_short byte 1: 00000000
u_short byte 2: 00000000
PacketInHandshake sent: 34 bytes

If I remove the htons() and just put the u_shorts in the buffer as they are, everything is as expected, little-endian ordered:

(screenshot of the packet's bytes without htons())

u_short byte 1: 00000111
u_short byte 2: 00000000
PacketInHandshake sent: 34 bytes

So what am I doing wrong?

Converting the endianness of a 16-bit number and storing it in a byte array is trivial; there is no need for library functions. Assuming a 32-bit CPU:

uint16_t u16 = ...;
uint8_t out[2];

out[0] = ((uint32_t)u16 >> 8) & 0xFFu;
out[1] = ((uint32_t)u16 >> 0) & 0xFFu;

The casts and the u suffix are there as good habits to block implicit promotion to int, which is problematic in some cases since int is a signed type.

Since shifts don't care about the underlying endianness, the above code works for both big-to-little and little-to-big conversions, as long as you go from one to the other.

This scales to 32-bit types as:

uint32_t u32 = ...;
uint8_t out[4];

out[0] = ((uint32_t)u32 >> 24) & 0xFFu;
out[1] = ((uint32_t)u32 >> 16) & 0xFFu;
out[2] = ((uint32_t)u32 >>  8) & 0xFFu;
out[3] = ((uint32_t)u32 >>  0) & 0xFFu;
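
Going the other way (bytes back to a 16-bit value) is just the mirrored shifts; a minimal sketch, with u16_back as an illustrative name:

/* out[0] holds the high byte, out[1] the low byte */
uint16_t u16_back = (uint16_t)(((uint32_t)out[0] << 8) | ((uint32_t)out[1] << 0));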

htons() converts whole integers: assigning its 16-bit result to a single byte of the buffer truncates it to the low byte, which after the swap is zero. This is how I used to do it:

    *(int16_t *)(&outBuffer[1]) = htons(userNameLength);

Now I write (casts suppress compiler warnings):

    outBuffer[1] = (char)(userNameLength >> 8);
    outBuffer[2] = (char)userNameLength;

Either way works. I now use the second form because there is no htonq for 64-bit integers.
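
Applied to the buffer layout from the question, a small helper could look roughly like this (a sketch only; put_u16_be is my own name, not a library function):

#include <stdint.h>

/* Store a 16-bit value into a byte buffer in network (big-endian) order,
   without htons(): high byte first, then low byte. */
static void put_u16_be(uint8_t *dst, uint16_t value) {
    dst[0] = (uint8_t)(value >> 8);
    dst[1] = (uint8_t)(value & 0xFFu);
}

/* e.g. for the two length fields from the question:
   put_u16_be(&outBuffer[1], userNameLength);
   put_u16_be(&outBuffer[1 + sizeof(u_short) + userNameLength], osVersionLength); */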
