
How to detect wrong subtractions of signed and unsigned integers in C++?

I have legacy code that subtracts an unsigned int from a signed int and casts the result to a float. It gave the expected result from Visual Studio 6 through Visual Studio 2013. With Visual Studio 2017 (15.6.3), the result is no longer the expected one. I have simplified the code to this:

    unsigned int uint = 10;
    signed int sint = 9;
    signed int res = sint - uint;                  // the int operand is converted to unsigned before subtracting
    float fres = static_cast<float>(sint - uint);  // the unsigned result is then converted to float

res is -1 with every VS version I have tested. With VS 2013 and earlier, fres is -1 as well. With VS 2017, fres is 4.29496730e+09, that is to say UINT_MAX. I have found here that the VS 2017 result for fres is the one conforming to the C++11 standard (if I understand correctly): the usual arithmetic conversions turn sint into an unsigned int, so sint - uint is computed in unsigned arithmetic and wraps around to UINT_MAX; converting that back to int yields -1, but converting it to float preserves the large unsigned value. The VS 2017 compiler issues no warning about this.
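One way to see that this is mandated behavior rather than a compiler quirk is to inspect the type of the expression itself. A minimal sketch (variable names taken from the snippet above):

    #include <type_traits>

    int main() {
        unsigned int uint = 10;
        signed int sint = 9;
        // The usual arithmetic conversions give the whole expression type
        // unsigned int, so the mathematical result -1 wraps to UINT_MAX.
        static_assert(std::is_same<decltype(sint - uint), unsigned int>::value,
                      "sint - uint is an unsigned expression");
        return 0;
    }

This compiles cleanly with any conforming C++11 compiler, confirming that the subtraction is performed in unsigned arithmetic everywhere, not just in VS 2017.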

How can I detect all the occurrences of such a bad subtraction in my code base?

MSVC is not able to detect this even with /W4 or /Wall, so an additional linter is required; clang-tidy, for example, detects it (courtesy of Stephen Newell).
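For instance, clang-tidy can surface clang's own sign-conversion diagnostic. One possible invocation (the file name sub.cpp is an assumption; the point is passing -Wsign-conversion to the compiler after the --):

    clang-tidy sub.cpp -- -Wsign-conversion

Compiler diagnostics appear in clang-tidy output under the clang-diagnostic-* check names (here clang-diagnostic-sign-conversion), so they can be filtered or promoted to errors like any other check.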

When using the g++ compiler, the option you are looking for is -Wsign-conversion.
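A minimal example, assuming the snippet above lives in sub.cpp:

    g++ -Wsign-conversion -c sub.cpp

This warns both about sint being implicitly converted to unsigned int inside the subtraction and about the unsigned result being converted back to signed int for res (the exact wording varies by GCC version). Note that for C++, -Wsign-conversion is not enabled by -Wall or -Wextra, so it must be requested explicitly.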
