
How can I set floating point precision to avoid floating point arithmetic error in Python?

I keep running into a floating point arithmetic error in Python that I can't seem to figure out.

Problem: I need to create a weighting such that all weights sum to 1, not, for example: 0.99999999999999.

As an example, the following code:

import numpy

values = numpy.array([9626.40000000034, 0., 0., 0., 0., 0.,
                      0., 0., 36907.300000000000054])
weights = values/values.sum()
weights.sum()

yields:

0.99999999999999989

Instead of 1. I have tried multiplying by 1000, converting to string (to cut off precision), converting back to float, and dividing by 1000. It doesn't work. Roughly, that attempt looked something like this (a sketch; the exact string formatting I used may have differed):
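
rounded = numpy.array([float('%.3f' % (w * 1000)) for w in weights]) / 1000
rounded.sum()   # the rounded digits are stored as binary floats again, so this can still miss 1.0

I have also tried using Decimal: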

from decimal import *

string_weight = []
float_weight = []
getcontext().prec = 3
for number in weights:
    string_weight.append(Decimal(str(number)))
for string in string_weight:
    float_weight.append(float(string))
fuel_weights = numpy.array(float_weight)
fuel_weights.sum()

The answer is:

1.0009999999999999

That is not what I want. I just want a simple “1.0”.

A sys.version report gives:

3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46) 
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]

I'm working on macOS Catalina.

The solution to this binary arithmetic problem is to use Decimal, and here is how to use it properly.

First, let me share a cleaner example of the problem.

import numpy
from decimal import *

# creating dummy values and weights
values = 1e-10 * numpy.ones(5)
weights = values/values.sum()
weights.sum()

yields:

0.9999999999999999

When I applied Decimal to this error, I ran into a nuance that wasn't intuitive to me: passing the result of a division operation through Decimal does nothing to remove the binary error, i.e.:

getcontext().prec = 5
Decimal(values[0])

yields:

Decimal('1.0000000000000000364321973154977415791655470655996396089904010295867919921875E-10')
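
This is because the Decimal constructor is exact: it captures whatever binary value the float already holds, and getcontext().prec applies only to arithmetic operations, not to construction. The classic 0.1 example shows both behaviors:

getcontext().prec = 5
Decimal(0.1)       # construction is exact; prec is ignored:
                   # Decimal('0.1000000000000000055511151231257827021181583404541015625')
Decimal(0.1) + 0   # arithmetic rounds to the context precision:
                   # Decimal('0.10000')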

In order for Decimal to correct the binary arithmetic error, the division itself has to be performed in Decimal, as follows.

getcontext().prec = 5
Decimal(1)/Decimal(7)

This yields the same answer as given in the Python docs:

Decimal('0.14286')
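
A related idiom worth knowing: applying unary plus to an existing Decimal re-rounds it to the current context precision, because unary plus counts as an arithmetic operation:

getcontext().prec = 5
+Decimal('0.142857142857')   # unary plus rounds to the context precision:
                             # Decimal('0.14286')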

In my case, the correct application looks something like this.

getcontext().prec = 5

weights_list = []
values = 1e-10 * numpy.ones(5)
sum_values = values.sum()
for value in values:
    # perform the division in Decimal so the context precision applies
    weight = Decimal(value) / Decimal(sum_values)
    weights_list.append(weight)
weights = numpy.array(weights_list)   # dtype=object array of Decimals
weights.sum()

The result is the mathematically correct answer, as opposed to the binary arithmetic one:

Decimal('1.0000')

which can be converted to a plain number with float() (numpy.float is simply an alias for the built-in float and has been removed in recent NumPy versions).
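
For example, the whole object array can be cast back at once. Note that this turns each Decimal back into a binary float, which is exactly what trips up the more complex case described in the next answer:

float_weights = weights.astype(float)   # calls float() on each Decimal element
float_weights.sum()                     # 1.0 for this example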

I am adding another answer instead of editing my previous one because I discovered that the earlier fix doesn't work for more complex cases like mine. The sum of the weights is 1.0 while they are still Decimal type, but the same error (not summing to 1.0) reappears once the array elements are converted to float and then added. It's not entirely clear to me why that case fails, but I am pleased to share a much simpler solution.

Following the names used in my previous post/answer, it turns out that the solution is:

precision = 5
weights = numpy.around(values / sum_values, decimals=precision)

Using the above, weights.sum() returns 1.0, which is the mathematically correct result.
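
For completeness, applied to the values from the original question the whole fix comes down to a few lines:

import numpy

values = numpy.array([9626.40000000034, 0., 0., 0., 0., 0.,
                      0., 0., 36907.300000000000054])
precision = 5
weights = numpy.around(values / values.sum(), decimals=precision)
weights.sum()   # 1.0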
