
Set floating point precision for operations

I'm looking for a way to force the computer to calculate a floating-point operation with a set number of significant digits. This is for pure learning reasons, so I don't care about the loss of accuracy in the result.

For example, if I have:

float a = 1.67;
float b = 10.0;
float c = 0.01;

float d = a * b + c;

And I want every number represented with 3 significant digits, I'd like to see:

d = 16.7;

Not:

d = 16.71;

So far, I got this as a possible answer: Limit floating point precision?

But using that strategy would bloat my code: I'd have to turn every floating-point variable into one with the precision I want, and then do the same with the result.

Is there an automatic way to fix the precision?

The floating-point data types are binary floating points, i.e., they have precision in terms of binary digits, and it is actually impossible to represent most decimal values exactly. As a result, you will have some problems truncating the operations to the correct number of decimal places in the first place. What could work is to format a floating-point value after each operation with a precision of n digits (e.g., with n == 3) and convert this back into a float. This won't be particularly efficient, but it would work. To avoid littering the code with the corresponding truncation logic, you would encapsulate the operations you need into a class which does the operation and appropriately truncates the result.

Alternatively, you could implement the necessary logic using a significand and a suitable base-10 exponent. The significand would be restricted to values between -999 and 999. It is probably more work to implement a class like this, but the result is likely to be more efficient.

So far, I got this as a possible answer: Limit floating point precision?

Read the second answer, which received ten votes, rather than the accepted one, which only received four votes. Don't do it.

You don't want to do this when you do calculations on paper, let alone on a computer. Those intermediate calculations are best done to at least one extra significant digit, and preferably two or more, beyond what the underlying data indicate. You truncate to the precision indicated by the data only at the very end. The only reason we do this on paper is that people aren't that good at dealing with a lot of digits. It's a short-circuit operation that is tuned to how people calculate (or miscalculate).

All that rounding intermediate calculations accomplishes is to create an opening for errors to creep in and to slow the computer down, oftentimes by quite a bit. Don't worry about the extra precision in those intermediate results. Simply display the results to the desired precision on output.

The opposite problem sometimes does apply. You may need to worry about loss of precision in your intermediate results. Sometimes addressing that loss of precision will mean changing from floats to doubles, or from doubles to variable-precision arithmetic (which is slow).
