
Can someone explain the difference in this C++ code?

I can't tell what the effective difference between these two snippets is; the first one seems to work more robustly. The second one works with the adjustment shown, but has issues with incomplete multibyte strings, and when I remove the resize(bytesWritten - 1) it doesn't work correctly at all. I would love to know why they behave differently. Thanks!

First:

size_t maxBytes = JSStringGetMaximumUTF8CStringSize(str);
std::vector<char> buffer(maxBytes);
JSStringGetUTF8CString(str, buffer.data(), maxBytes);
// The char* constructor copies up to (and excluding) the first NUL byte.
return std::string(buffer.data());

Second:

std::string result;
size_t maxBytes = JSStringGetMaximumUTF8CStringSize(str);
result.resize(maxBytes);
size_t bytesWritten = JSStringGetUTF8CString(str, &result[0], maxBytes);
// JSStringGetUTF8CString writes the null terminator, so we want to resize
// to `bytesWritten - 1` so that `result` has the correct length.
result.resize(bytesWritten - 1);
return result;

It is not legal to write through the pointer returned by c_str(), nor through data() until C++17 (which added a non-const data() overload). Writing through the address of the first element, as the second snippet does, was undefined behaviour before C++11; since C++11 a std::string's storage is guaranteed contiguous, and such writes are allowed as long as you stay within [0, size()). That's the language-level difference: the first snippet uses a std::vector<char>, where all of these writes have always been well-defined. There is also a behavioural difference: std::string(buffer.data()) stops at the first NUL byte, whereas result.resize(bytesWritten - 1) keeps exactly the number of bytes the conversion reported. It has nothing to do with JavaScriptCore, by the way.
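
For reference, here is a minimal sketch of the second approach written to be valid under C++11 rules. The wrapper name toStdString is my own invention; the JavaScriptCore calls are the same ones used in the question:

// Header as on Apple platforms; other WebKit builds may name it differently.
#include <JavaScriptCore/JavaScriptCore.h>
#include <string>

// Convert a JSStringRef to std::string by writing straight into the
// string's own buffer (contiguous and writable since C++11).
std::string toStdString(JSStringRef str) {
    std::string result;
    size_t maxBytes = JSStringGetMaximumUTF8CStringSize(str);
    result.resize(maxBytes);
    size_t bytesWritten = JSStringGetUTF8CString(str, &result[0], maxBytes);
    // bytesWritten counts the terminating NUL the API writes; drop it.
    result.resize(bytesWritten > 0 ? bytesWritten - 1 : 0);
    return result;
}

Under C++17 you can pass result.data() instead of &result[0]; the effect is the same.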
