
numpy matrix inversion rounding errors

I am getting a very strange value for the (1,1) entry of my BinvA matrix.
I am just trying to invert the matrix B and compute the product (B^-1)A.

When I do the calculation by hand, the (1,1) entry should be 0, but instead I get 1.11022302e-16. How can I fix it? I know floating-point numbers can't be represented with full accuracy, but why is the result this inaccurate instead of rounding to 0, and is there any way I can make it more accurate?

Here is my code:

import numpy as np

A = np.array([[2, 2], [4, -1]], int)  # note: np.int was removed in NumPy >= 1.24
A = A.transpose()

B = np.array([[1, 3], [-1, -1]], int)
B = B.transpose()

Binv = np.linalg.inv(B)  # calculate the inverse

BinvA = np.dot(Binv, A)  # (B^-1) A
print(BinvA)

My print statement outputs:

[[  1.11022302e-16  -2.50000000e+00]
 [ -2.00000000e+00  -6.50000000e+00]]

When you compute the inverse, your arrays are converted to float64, whose machine epsilon is about 2.2e-16 (finfo reports a resolution of 1e-15). The epsilon is the relative quantization step of a floating-point number.

When in doubt, we can ask numpy for information about a floating-point data type using the finfo function. In this case:

np.finfo('float64')
finfo(resolution=1e-15, 
      min=-1.7976931348623157e+308, max=1.7976931348623157e+308, 
      dtype=float64)

So, technically, your value being smaller than eps is a very accurate representation of 0 for a float64 type.
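In fact, the stray entry is exactly half the float64 machine epsilon, which is about as close to 0 as a rounding leftover of this computation can get. A quick check (assuming a standard IEEE 754 build of numpy):

import numpy as np

eps = np.finfo(np.float64).eps
print(eps)      # 2.220446049250313e-16
print(eps / 2)  # 1.1102230246251565e-16, the (1,1) entry printed above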

If it is only the representation that bothers you, you can tell numpy not to print small floating-point numbers (on the order of 1 eps or less from 0) with:

np.set_printoptions(suppress=True)

After that, your print statement outputs:

[[ 0.  -2.5]
 [-2.  -6.5]]
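If you want to change the stored values themselves, not just how they are printed, one option (my addition, not part of the original answer) is to chop entries that fall within a small tolerance of zero. The chop helper below is hypothetical:

import numpy as np

A = np.array([[2, 2], [4, -1]], int).transpose()
B = np.array([[1, 3], [-1, -1]], int).transpose()
BinvA = np.dot(np.linalg.inv(B), A)

def chop(arr, tol=1e-12):
    # Set entries smaller than tol in absolute value to exactly 0.
    out = arr.copy()
    out[np.abs(out) < tol] = 0.0
    return out

print(chop(BinvA))
# [[ 0.  -2.5]
#  [-2.  -6.5]]

The tolerance has to respect the problem's scale: anything comparable to a few eps times the matrix norm is a plausible rounding leftover, while a genuinely tiny result would be destroyed by chopping.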

Note that this is a general numerical problem common to all floating-point implementations. You can find more info about floating-point rounding errors on Stack Overflow or elsewhere on the net.

This isn't a complete answer, but it may point you in the right direction. What you really want are numpy arrays that use Decimals for math. You might reasonably think to try:

import numpy as np
from decimal import Decimal

A = np.array([[2, 2], [4, -1]], int)
for i, a in np.ndenumerate(A):
    A[i] = Decimal(int(a))  # int() needed: Decimal rejects numpy integer scalars
    print(type(A[i]))       # still a numpy integer, not a Decimal

But alas, Decimals are not among the datatypes supported out of the box in numpy, so each time you try to jam a Decimal into the array, it re-casts it back to the array's native numeric dtype.

One possibility would be to set the datatype, thus:

def decimal_array(arr):
    # dtype=Decimal falls back to object dtype, which can hold Decimals
    X = np.array(arr, dtype=Decimal)
    for i, x in np.ndenumerate(X):
        X[i] = Decimal(x)
    return X

A = decimal_array([[2,2],[4,-1]])
B = decimal_array([[1,3],[-1,-1]])

A = A.transpose()
B = B.transpose()
Binv = np.linalg.inv(B) #calculate the inverse

But now, if you

print(Binv.dtype)

you'll see that the inversion has recast it back to float. The reason is that linalg.inv (like many other functions) looks for B's "common_type," the scalar type to which it believes it can safely coerce your array elements.

It may not be hopeless, though. I looked into whether you could solve this by creating a custom dtype, but it turns out that scalars (ints, floats, etc.) are not dtypes at all. Instead, what you probably want to do is register a new scalar--that's the Decimal--as described in the numpy documentation on scalars. You'll see a link out to the Numpy C-API (don't be afraid). Search the page for "register" and "scalar" to get started.
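A lighter-weight workaround (my sketch, not part of the original answer) relies on the fact that np.dot works on object arrays by falling back to Python's own arithmetic operators. For a 2x2 matrix you can invert by hand with the adjugate formula using exact Fraction arithmetic, so no float ever appears; the exact_inv_2x2 helper is hypothetical:

import numpy as np
from fractions import Fraction

def exact_inv_2x2(M):
    # Adjugate formula for a 2x2 matrix; stays exact for Fraction entries.
    (a, b), (c, d) = M
    det = a * d - b * c
    return np.array([[ d / det, -b / det],
                     [-c / det,  a / det]], dtype=object)

# The matrices from the question, already transposed.
A = np.array([[2, 4], [2, -1]], dtype=object)
B = np.array([[1, -1], [3, -1]], dtype=object)

Bf = np.vectorize(Fraction)(B)  # promote every entry to an exact Fraction
print(np.dot(exact_inv_2x2(Bf), A))
# [[0 -5/2]
#  [-2 -13/2]]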
