
Convert C++ type int16_t to int64_t without modifying the underlying binary

I am trying to generate a hash code for an object in 3D space so it can be quickly found in an array using a binary search algorithm.

Since each object in this array has a unique XYZ location, I figured I could use those three values to generate the hash code. I used the following code to try and generate the hash code.

int64_t generateCode(int16_t x, int16_t y, int16_t z) {
    int64_t hashCode = z; // Set the Z bits.
    hashCode <<= 16;      // Shift them up 16 bits.
    hashCode |= y;        // Set the Y bits.
    hashCode <<= 16;      // Shift everything up another 16 bits.
    hashCode |= x;        // Set the X bits.
    return hashCode;
}

Now here is the problem, as far as I can tell. Consider the following piece of code:

int16_t x = -1;
cout << "X: " << bitset<16>(x) << endl;//Prints the binary value of X.
int64_t y = x;//Set Y to X. This will automatically cast the types.
cout << "Y: " << bitset<64>(y) << endl;//Prints the binary value of Y.

The output of this program is as follows:

X: 1111111111111111
Y: 1111111111111111111111111111111111111111111111111111111111111111

It keeps the numerical value of the number, but it changes the underlying binary to do that (the new high bits are filled with copies of the sign bit). I don't want that binary to be modified; I want output like the following:

X: 1111111111111111
Y: 0000000000000000000000000000000000000000000000001111111111111111

By doing that, I can then create a unique hash code from the XYZ values that would look like the following:

           Unused            X                 Y                 Z
HashCode: [0000000000000000][0000000000000000][0000000000000000][0000000000000000]

And that will be used for the binary search.

Most compilers will understand and optimize this to do what you actually want:

int16_t a[4] = { 0, z, y, x };
int64_t res;
memcpy(&res, a, sizeof(res));

(The compiler understands that this memcpy can be done with a single 64-bit memory operation and will not actually call the real memcpy.) Note that which field ends up in which 16 bits of the result depends on the target's endianness, so the array element order may need adjusting for the layout you want.
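As a rough, self-contained sketch of this approach, assuming a little-endian target (where a[0] lands in the low 16 bits of res), the elements can be ordered so the result matches the layout built by the shift-based function in the question; on a big-endian machine the order { 0, z, y, x } from the snippet above would give that layout instead:

#include <cstdint>
#include <cstring>

// Sketch only: little-endian assumed, so a[0] becomes the low 16 bits of res.
// Layout of res: x in bits 15..0, y in 31..16, z in 47..32, top 16 bits unused.
int64_t generateCode(int16_t x, int16_t y, int16_t z) {
    int16_t a[4] = { x, y, z, 0 };
    int64_t res;
    std::memcpy(&res, a, sizeof(res));
    return res;
}

With optimization enabled this typically compiles down to a few register moves rather than an actual call to memcpy.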

Convert each int16_t to a uint16_t first, then merge them together into a uint64_t that you finally cast to an int64_t:

int64_t generateCode(int16_t x, int16_t y, int16_t z) {
    uint64_t hashCode = static_cast<uint16_t>(z); // zero-extends, no sign extension
    hashCode <<= 16;
    hashCode |= static_cast<uint16_t>(y);
    hashCode <<= 16;
    hashCode |= static_cast<uint16_t>(x);
    return static_cast<int64_t>(hashCode);
}

The int16_t / int64_t types are required to use a two's complement representation (7.20.1.1 paragraph 1 of the C standard), so converting them to the uint*_t of the same width is a bit-wise no-op.
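For a quick sanity check, here is a minimal self-contained sketch that just repeats the function above and prints the packed bits:

#include <bitset>
#include <cstdint>
#include <iostream>

int64_t generateCode(int16_t x, int16_t y, int16_t z) {
    uint64_t hashCode = static_cast<uint16_t>(z);
    hashCode <<= 16;
    hashCode |= static_cast<uint16_t>(y);
    hashCode <<= 16;
    hashCode |= static_cast<uint16_t>(x);
    return static_cast<int64_t>(hashCode);
}

int main() {
    // x = -1 would previously have sign-extended and clobbered the other fields.
    std::cout << std::bitset<64>(generateCode(-1, 2, 3)) << '\n';
    // Expected output (spaces added here to mark the 16-bit fields):
    // 0000000000000000 0000000000000011 0000000000000010 1111111111111111
    //      unused             z                y                x
}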

Try int64_t y = (uint16_t) x;

What this does is make sure the extra high bits are filled with zeros rather than ones, because the value is converted to unsigned first and unsigned values are zero-extended. Keep in mind that the sign information now lives only in the low 16 bits, so check the sign bit yourself if you still need it.
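Applied to the snippet from the question, that looks roughly like this:

#include <bitset>
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
    int16_t x = -1;
    cout << "X: " << bitset<16>(x) << endl;
    int64_t y = (uint16_t) x;  // zero-extended: the upper 48 bits stay 0
    cout << "Y: " << bitset<64>(y) << endl;
    // Y: 0000000000000000000000000000000000000000000000001111111111111111
}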
