

Different results in Matlab and Python for matrix multiplication and exponentiation

While migrating to Python from Matlab, I get different results for matrix multiplication and exponentiation.

This is a simple softmax classifier implementation. I run the Python code, export the variables as a mat file, then run the original Matlab code, load the variables exported from Python, and compare the two.

Python code:

import numpy as np
from scipy import io

f = np.array([[4714, 4735, 4697], [4749, 4748, 4709]])
f = f.astype(np.float64)
a = np.array([[0.001, 0.001, 0.001], [0.001, 0.001, 0.001], [0.001, 0.001, 0.001]])

reg = f.dot(a)
omega = np.exp(reg)
sumomega = np.sum(omega, axis=1)

io.savemat('python_variables.mat', {'p_f': f,
                                    'p_a': a,
                                    'p_reg': reg,
                                    'p_omega': omega,
                                    'p_sumomega': sumomega})

Matlab code:

f = [4714, 4735, 4697; 4749, 4748, 4709];
a = [0.001, 0.001, 0.001; 0.001, 0.001, 0.001; 0.001, 0.001, 0.001];

reg = f*a;
omega = exp(reg);
sumomega = sum(omega, 2);
load('python_variables.mat');  % load the variables exported from Python

I compare the results by computing the following norms:

norm(f - p_f) = 0
norm(a - p_a) = 0
norm(reg - p_reg) = 3.0767e-15
norm(omega - p_omega) = 4.0327e-09
norm(omega - exp(p_f*p_a)) = 0

So the difference seems to be caused by the multiplication, and it grows much larger with exp(). My original data matrix is larger than this, and there the differences in omega become much larger:

norm(reg - p_reg) = 7.0642e-12
norm(omega - p_omega) = 1.2167e+250

This also causes sumomega to go to inf or zero in some cases in Python but not in Matlab, so the classifier outputs differ.

What am I missing here? How can I fix this to get exactly the same results?

The difference looks like numerical precision to me. With floating-point operations, the order of operations matters: you get (slightly) different results when reordering operations because the rounding happens differently.
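For example, here is a minimal sketch (with hypothetical data, not the asker's) showing that merely reordering a floating-point sum changes the rounded result:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100000)

s_pairwise = np.sum(x)       # NumPy uses pairwise summation internally
s_naive = 0.0
for v in x:                  # naive left-to-right accumulation
    s_naive += v

print(s_pairwise - s_naive)  # tiny but generally nonzero

Both loops compute the same mathematical sum; only the grouping of the additions differs, and that alone is enough to change the last few bits of the result.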

It is likely that Python and MATLAB implement matrix multiplication slightly differently, and therefore you should not expect exactly the same results.

If you need to raise e to the power of the result of this multiplication, you are going to produce a result with a higher degree of imprecision. This is just the nature of floating-point arithmetic.
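To see why, note that the derivative of exp is exp itself, so an absolute error delta in reg becomes roughly exp(reg) * delta in omega. Plugging in the numbers from the question (a rough first-order estimate, not an exact bound):

import numpy as np

reg_entry = (4714 + 4735 + 4697) * 0.001  # ~14.146, a typical entry of reg
delta = 3.0767e-15                        # the observed norm(reg - p_reg)

# First-order error propagation through exp():
print(np.exp(reg_entry) * delta)          # ~4e-09, the order of norm(omega - p_omega)

This matches the observed jump from 3.0767e-15 to 4.0327e-09, and with larger inputs exp(reg) itself explodes, which is how a 7.0642e-12 difference in reg can become 1.2167e+250 in omega.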

The issue here is not that you don't get the exact same result in MATLAB and Python; the issue is that both produce imprecise results, and you are not aware of what precision you are getting.


The softmax function is known to overflow. The solution is to subtract the maximum input value from all input values. See this other question for more details.
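A minimal sketch of the max-subtraction trick in NumPy (assuming rows are samples, matching the sum(omega, axis=1) above; stable_softmax is an illustrative name, not from the question):

import numpy as np

def stable_softmax(reg):
    # Subtracting the row-wise max leaves the softmax unchanged
    # mathematically (it cancels in the ratio) but keeps exp()
    # from overflowing to inf.
    shifted = reg - np.max(reg, axis=1, keepdims=True)
    omega = np.exp(shifted)
    return omega / np.sum(omega, axis=1, keepdims=True)

f = np.array([[4714.0, 4735.0, 4697.0], [4749.0, 4748.0, 4709.0]])
a = np.full((3, 3), 0.001)
print(stable_softmax(f.dot(a)))  # no inf or all-zero rows, even for large inputs

With this formulation the largest argument passed to exp() is 0, so sumomega can no longer overflow to inf, regardless of how large the entries of reg are.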
