I have a function in my code that looks something like this:
bool foo(ObjectType& object) {
    // code
    const float result = object.float_data + object.bool_data * integer_data;
    // code
}
When looking for a bug I found that result sometimes has incorrect values, and the reason is that the bool * integer is sometimes calculated as 255 * integer instead of 1 * integer. C++ says that in an integer context a bool gets converted to either zero or one, so I don't understand this. I multiply by a bool in other parts of the code and it works fine. Also, this is random: sometimes it is converted to 1, sometimes to 255. The debugger likewise shows true or 255, respectively. When this bug happens, it happens consistently for that execution; recompiling the code has no effect, it still keeps happening randomly.
As per C++17 7.14 [conv.bool] Boolean conversions, converting a non-boolean to a boolean is part of the standard conversion process, meaning it's actually done when a value is assigned to the boolean:

A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be converted to a prvalue of type bool. A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true. For direct-initialization (11.6), a prvalue of type std::nullptr_t can be converted to a prvalue of type bool; the resulting value is false.
It is not guaranteed that the boolean will be zero or one if, for example, you don't initialise it, or you initialise it in such a way that it's treated as a non-boolean:

#include <cstring>

void someFunction() {
    bool xyzzy;                   // Holds some arbitrary value.
    std::memcpy(&xyzzy, "x", 1);  // Very contrived, wouldn't pass my
                                  // own code review standards :-)
}
So the first thing I'd be looking at is ensuring you are initialising/assigning it correctly.
In fact, nowadays I very rarely bring a variable into existence without explicitly initialising it to something. Even if it's changed again before use, I'd rather rely on the compiler to figure that out and optimise the initialisation away if it deems it useful.
But you can possibly fix this just by using boolean values as they were intended. In other words, as repositories of truth rather than evaluating them as integral values. Try this instead:
const float result = object.float_data + (object.bool_data ? integer_data : 0);
That will work whether bool_data is a genuine true (in which case integer_data is added to your result), a genuine false (in which case nothing is added), or even an errant non-zero value like 255, since the ternary only evaluates its truth rather than using it as a multiplier.

Solved. The bool value was not initialized, so it sometimes had 255 (rubbish) in it. I assumed that the bool is converted to 0 or 1 when it is used in an integer context; instead, it is converted to 0 or 1 when it is set from a value. (It was totally my mistake, sorry for wasting your time.)
To sum it up with code:

float float_data = 50.0f;
int integer_data = 30;
bool bool_data;
float result;

bool_data = 255;                                           // OK
reinterpret_cast<unsigned char&>(bool_data) = 250;         // NOT OK
result = float_data + (bool_data ? integer_data : 0);      // OK
result = float_data + bool_data * integer_data;            // NOT OK
result = float_data + !!bool_data * integer_data;          // NOT OK
result = float_data + (bool_data == true) * integer_data;  // NOT OK

"OK" meaning it is still wrong, but it is at least 0 or 1.
You should inspect where the bool is set. Notably, the C++ standard does not guarantee that any memory accessed as a bool will convert to 1 if the memory was not exactly 0; it only guarantees that a bool of value true will convert to 1. As mentioned here, it is possible for a bool to have a value that is neither true nor false as a consequence of undefined behavior. That may seem obvious (UB, after all) but it is also surprising (apparently even by the standard authors' standards).
Here is a practical demonstration of this, using memcpy (a foo matching the question's arithmetic is filled in so the snippet is self-contained):

#include <array>
#include <cstring>

#pragma pack(1)
struct ObjectType
{
    float float_data = -3.0f;
    bool  bool_data = false;
    int   integer_data = 2;
};

float foo(ObjectType& object)
{
    return object.float_data + object.bool_data * object.integer_data;
}

volatile unsigned char x = 7;

float test()
{
    // Don't actually do this, it is for demonstration only.
    std::array<unsigned char, sizeof(ObjectType)> data = { 0, 0, 0, 0, /**/ x, /**/ 2, 0, 0, 0 };
    ObjectType obj;
    std::memcpy(&obj, data.data(), sizeof(obj));
    return foo(obj);
}
http://coliru.stacked-crooked.com/a/0221a82a6d35e18b <- runtime evaluation yields 14
http://coliru.stacked-crooked.com/a/982ff8e4d7503f08 <- compile-time evaluation yields 2
Indeed, inspecting the binary shows that, for gcc, foo literally interprets the bool as an integer and multiplies with it. I would conclude that gcc achieves standard conformance by only ever storing a 1 or 0 whenever it stores to a bool, thus behaving as if true is only ever converted to 1 (as long as you never force a bool into a different state than true or false).
I think you should use the following code instead:

bool foo(ObjectType& object) {
    // code
    const float result = object.float_data + int(object.bool_data) * int(integer_data);
    // code
}