
Why would someone use multiplication by float instead of division

Reading code from a program I'm newly contributing to, I found that division was almost never used, in favor of multiplication by floats.

One example would be averaging two floats, such as:

float a = 0.42;
float b = 0.666;

float c = (a + b) * 0.5;

Clearly the intent is to divide by two to average a and b.

However, although it's simple, I find the use of * 0.5 slightly harmful for readability (especially semantically) compared to the following:

float c = (a + b) / 2;

This produces the exact same result.

Is there any reason why I would want to use * 0.5 instead of / 2 in this case?

The proposed duplicate indicates that multiplication is faster, which becomes obviously false once any optimisation level is used (and yes, we do compile with optimisations).

The question is about C++, but answers for other languages could be helpful too.

In most cases this just comes down to personal preference or general coding style. Say we are changing values a lot, just fiddling with a function's parameters to see how the output behaves. Then, since the values may change, writing the factor as a float multiplication makes it easy to switch to a number that wouldn't make sense to write as a division.

Also, one advantage of multiplying by a float is that it helps avoid the truncation toward zero that can silently happen when dividing two ints.
