
Matlab/Octave/Numpy numeric difference

I'm porting an algorithm that works in Matlab to numpy and I've observed some strange behavior. The relevant segment of code is:

P = eye(4)*1e20;
A = [1 -0.015 -0.025 -0.035; 0.015 1 0.035 -0.025; 0.025 -0.035 1 0.015; 0.035 0.025 -0.015 1];
V1 = A*(P*A')
V2 = (A*P)*A'

When I run this code in Matlab, it produces the following matrices:

V1 = 1.0021e+20            0  -8.0000e+00            0
              0   1.0021e+20            0            0
    -8.0000e+00            0   1.0021e+20            0
              0            0            0   1.0021e+20


V2 = 1.0021e+20            0  -8.0000e+00            0
              0   1.0021e+20            0            0
    -8.0000e+00            0   1.0021e+20            0
              0            0            0   1.0021e+20

Note that V1 and V2 are the same, as expected.

When the same code runs in Octave, it produces:

V1 = 1.0021e+20   4.6172e+01  -1.3800e+02   1.8250e+02
    -4.6172e+01   1.0021e+20  -1.8258e+02  -1.3800e+02
     1.3801e+02   1.8239e+02   1.0021e+20  -4.6125e+01
    -1.8250e+02   1.3800e+02   4.6125e+01   1.0021e+20

V2 = 1.0021e+20  -4.6172e+01   1.3801e+02  -1.8250e+02
     4.6172e+01   1.0021e+20   1.8239e+02   1.3800e+02
    -1.3800e+02  -1.8258e+02   1.0021e+20   4.6125e+01
     1.8250e+02  -1.3800e+02  -4.6125e+01   1.0021e+20

In numpy, the segment becomes

import numpy

A = numpy.array([[1, -0.015, -0.025, -0.035],
                 [0.015, 1, 0.035, -0.025],
                 [0.025, -0.035, 1, 0.015],
                 [0.035, 0.025, -0.015, 1]])
P = numpy.eye(4) * 1e20
print(numpy.dot(A, numpy.dot(P, A.transpose())))
print(numpy.dot(numpy.dot(A, P), A.transpose()))

which outputs

[[  1.00207500e+20   4.61718750e+01  -1.37996094e+02   1.82500000e+02]
 [ -4.61718750e+01   1.00207500e+20  -1.82582031e+02  -1.38000000e+02]
 [  1.38011719e+02   1.82386719e+02   1.00207500e+20  -4.61250000e+01]
 [ -1.82500000e+02   1.38000000e+02   4.61250000e+01   1.00207500e+20]]
[[  1.00207500e+20  -4.61718750e+01   1.38011719e+02  -1.82500000e+02]
 [  4.61718750e+01   1.00207500e+20   1.82386719e+02   1.38000000e+02]
 [ -1.37996094e+02  -1.82582031e+02   1.00207500e+20   4.61250000e+01]
 [  1.82500000e+02  -1.38000000e+02  -4.61250000e+01   1.00207500e+20]]

So, both Octave and numpy provide the same answer, but it's very different from Matlab's. The first point is that V1 != V2, which doesn't seem right. The other point is that, although the off-diagonal elements are many orders of magnitude smaller than the diagonal ones, they seem to be causing problems in my algorithm.

Does anyone know why numpy and Octave behave this way?

They use doubles internally, which have only about 15 significant decimal digits of precision. Your math operations likely exceed this, which causes the mathematically wrong results.
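To see the effect at this scale: the spacing between adjacent doubles near 1e20 is about 1.6e4, so contributions of order 1e2 simply vanish inside the large intermediate sums. A quick demonstration (np.spacing reports that gap):

import numpy as np

print(np.spacing(1e20))   # 16384.0: the gap between adjacent doubles near 1e20
print(1e20 + 8.0 - 1e20)  # 0.0: the 8 is swallowed by the huge intermediate
print(1e20 - 1e20 + 8.0)  # 8.0: the same terms, reordered, survive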

Worth a read: http://floating-point-gui.de/

Edit: From the docs I gather that there is no higher precision available in Numpy. It seems that SymPy, though, may give you the needed precision, if that library works for you as well.
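For instance, here is a minimal sketch with SymPy's exact rational matrices (assuming SymPy is available; the entries here are exact thousandths, so they can be written as exact fractions):

from sympy import Matrix, Rational, eye

r = lambda x: Rational(x, 1000)   # build each entry as an exact fraction
A = Matrix([[1,     r(-15), r(-25), r(-35)],
            [r(15), 1,      r(35),  r(-25)],
            [r(25), r(-35), 1,      r(15)],
            [r(35), r(25),  r(-15), 1]])
P = eye(4) * Rational(10)**20
V1 = A * (P * A.T)
V2 = (A * P) * A.T
print(V1 == V2)           # True: rational arithmetic has no rounding error
print(V1.is_symmetric())  # True; in fact the exact product is diagonal, so
                          # even Matlab's -8 entries are rounding error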

For whatever it's worth, I get results identical to Matlab's on a 64-bit system:

[[  1.00207500e+20   0.00000000e+00  -8.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   1.00207500e+20   0.00000000e+00   0.00000000e+00]
 [ -8.00000000e+00   0.00000000e+00   1.00207500e+20   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00207500e+20]]
[[  1.00207500e+20   0.00000000e+00  -8.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   1.00207500e+20   0.00000000e+00   0.00000000e+00]
 [ -8.00000000e+00   0.00000000e+00   1.00207500e+20   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00207500e+20]]

If you're using a 32-bit system (or you have a 32-bit version of Python and numpy installed on a 64-bit system) you'll run into precision problems and get a different answer, as noted by @Lucero below. You might try explicitly specifying 64-bit floats in that case (the operations will be slower, though). For example, try using np.array(..., dtype=np.float64).
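A quick sanity check of what your build actually is (not from the original post, just standard Python/numpy introspection):

import struct
import numpy as np

print(struct.calcsize('P') * 8)   # 32 or 64: the pointer width of this Python build
print(np.array([[1.0]]).dtype)    # the dtype your float literals produce; pass
                                  # dtype=np.float64 explicitly if it's narrower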

If you think you need additional precision, you can use np.longdouble (same as np.float128 on a 64-bit system and np.float96 on a 32-bit one), but this may not be supported on all platforms, and many linear algebra functions will truncate things back to the native precision.
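A minimal sketch of the longdouble route (what longdouble actually is varies by platform, and the dot products will take a slower, non-BLAS path):

import numpy as np

print(np.finfo(np.longdouble))    # reports the actual precision on this platform

A = np.array([[1,     -0.015, -0.025, -0.035],
              [0.015,  1,      0.035, -0.025],
              [0.025, -0.035,  1,      0.015],
              [0.035,  0.025, -0.015,  1]], dtype=np.longdouble)
P = np.eye(4, dtype=np.longdouble) * 1e20
V1 = A.dot(P.dot(A.T))
V2 = A.dot(P).dot(A.T)
print(np.abs(V1 - V2).max())      # the V1/V2 mismatch should shrink considerably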

Also, what BLAS library are you using? The numpy and Octave results are likely the same because they're using the same BLAS library.
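You can check which one your numpy build is linked against with:

import numpy as np
np.show_config()   # prints the BLAS/LAPACK build configuration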

Finally, you can simplify your numpy code down to:

import numpy as np
A = np.array([[1,     -0.015, -0.025, -0.035],
              [0.015, 1,      0.035,  -0.025],
              [0.025, -0.035, 1,      0.015],
              [0.035, 0.025,  -0.015, 1]])
P = np.eye(4)*1e20
print(A.dot(P.dot(A.T)))
print(A.dot(P).dot(A.T))

np.einsum comes a lot closer to the MATLAB result:

In [1299]: print(np.einsum('ij,jk,lk',A,P,A))
[[  1.00207500e+20   0.00000000e+00  -5.07812500e-02   0.00000000e+00]
 [  0.00000000e+00   1.00207500e+20   5.46875000e-02   0.00000000e+00]
 [ -5.46875000e-02   5.46875000e-02   1.00207500e+20   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00207500e+20]]

The off-diagonal terms in row and column 2 are different, but it has the same zeros elsewhere.

With the double dot, P.dot(A.T) creates rounding errors when it adds the products, and those propagate into the next dot. einsum generates all the products, with just one summation. I suspect the MATLAB interpreter recognizes this situation and performs a special calculation designed to minimize the rounding errors.
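The underlying reason the grouping matters is that floating-point addition is not associative, as even a textbook example shows:

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
print(0.1 + (0.2 + 0.3))   # 0.6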

Numpy Matrix Multiplication U*B*UT Results in Non-symmetric Matrix is a more recent question with the same explanation.
