
Behavior of assigning uint16_t to an index of std::vector<uint8_t>

std::vector<uint8_t> v(4);
uint16_t cafe = 0xCAFE;
uint16_t babe = 0xBABE;
v[0] = cafe;
v[2] = babe;

The behavior I was going for would result in:

v[0] == 0xCA
v[1] == 0xFE
v[2] == 0xBA
v[3] == 0xBE

but instead I get:

v[0] == 0xFE
v[1] == 0x00
v[2] == 0xBE
v[3] == 0x00

What should I do to get the results I'm looking for?

The reason your code didn't work is that C++ implicitly converts the values to uint8_t, the element type your vector holds:

v[0] = (uint8_t)cafe; // conceptually

or

v[0] = (uint8_t)(cafe & 0xff); // conceptually: only the low byte 0xFE is stored; v[1] is never written, so it stays 0x00

The following will do what you want:

v[0] = (uint8_t)((cafe >> 8) & 0xff);
v[1] = (uint8_t)((cafe >> 0) & 0xff);
v[2] = (uint8_t)((babe >> 8) & 0xff);
v[3] = (uint8_t)((babe >> 0) & 0xff);
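
If you have many values to pack, you could wrap the shift-and-mask pattern in a small helper. A minimal sketch (push_be16 is a hypothetical name introduced here, not part of the original answer):

#include <cstdint>
#include <vector>

// Hypothetical helper: appends a 16-bit value to a byte vector
// in big-endian order (high byte first).
void push_be16(std::vector<uint8_t>& out, uint16_t value) {
    out.push_back((uint8_t)((value >> 8) & 0xff)); // high byte first
    out.push_back((uint8_t)(value & 0xff));        // then low byte
}

With it, starting from an empty vector, push_be16(v, cafe); push_be16(v, babe); produces the same four bytes.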

If you are on a big-endian machine, don't mind your code being unportable, and want an extreme micro-optimization, you could do this:

*(uint16_t*)&v[0] = cafe;
*(uint16_t*)&v[2] = babe;
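
Note that even with the pointer cast written correctly, reading or writing the vector's uint8_t storage through a uint16_t lvalue violates C++'s strict aliasing and alignment rules. A sketch of a well-defined alternative is std::memcpy, which compilers typically fold into the same single store; it is still endian-dependent, so it only gives this byte order on a big-endian machine:

#include <cstring>

std::memcpy(&v[0], &cafe, sizeof cafe); // copies bytes in the machine's native order
std::memcpy(&v[2], &babe, sizeof babe);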

If you want those four elements to have those four values, set those four elements to those values. There's no mystery here, other than why you would expect some other way to work.
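
Taken literally, that is just:

v[0] = 0xCA;
v[1] = 0xFE;
v[2] = 0xBA;
v[3] = 0xBE;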
