
MATLAB matrix power algorithm

I'm looking to port an algorithm from MATLAB to Python. One step in said algorithm involves taking A^(-1/2), where A is a 9x9 square complex matrix. As I understand it, the square root of a matrix (and by extension its inverse) is not unique.

I've been experimenting with scipy.linalg.fractional_matrix_power and an approximation using A^(-1/2) = exp((-1/2)*log(A)) with scipy's built-in expm and logm functions. The former is exceptionally poor and only provides 3 decimal places of precision, whereas the latter is decently correct for elements in the top-left corner but gets progressively worse as you move down and to the right. This may or may not be a perfectly valid mathematical solution to the expression; however, it doesn't suffice for this application.
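For concreteness, a minimal sketch of the two approaches side by side (using a random 9x9 complex test matrix as a stand-in for my actual data), with a residual check against the defining property that A^(-1/2) squared times A should return the identity:

import numpy as np
from scipy.linalg import fractional_matrix_power, expm, logm

# stand-in for the real data: a random 9x9 complex matrix
rng = np.random.default_rng(42)
A = rng.random((9, 9)) + 1j * rng.random((9, 9))

X1 = fractional_matrix_power(A, -0.5)  # approach 1: scipy directly
X2 = expm(-0.5 * logm(A))              # approach 2: the exp/log identity

# residual check: any valid A^(-1/2) must satisfy X @ X @ A = I
I = np.eye(9)
print(np.abs(X1 @ X1 @ A - I).max())
print(np.abs(X2 @ X2 @ A - I).max())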

As a result, I'm looking to directly implement MATLAB's matrix power algorithm in Python so that I can 100% confirm the same result each time. Does anyone have any insight or documentation on how this would work? The more parallelizable the algorithm is, the better, as the eventual goal is to rewrite it in OpenCL for GPU acceleration.
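For context, the textbook route I would expect to parallelize well is the eigendecomposition one, A^(-1/2) = V diag(λ^(-1/2)) V^(-1). A minimal sketch is below, though I can't confirm this is what MATLAB does internally:

import numpy as np

def inv_sqrt_eig(A):
    # A^(-1/2) via the eigendecomposition A = V diag(w) V^(-1);
    # assumes A is diagonalizable with no eigenvalues on the
    # principal branch cut (the negative real axis)
    w, V = np.linalg.eig(A)
    return (V * w ** -0.5) @ np.linalg.inv(V)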

EDIT: An MCVE as requested:

[[(0.591557294607941+4.33680868994202e-19j), (-0.219707725574605-0.35810724986609j), (-0.121305654177909+0.244558388829046j), (0.155552026648172-0.0180264818714123j), (-0.0537690384136066-0.0630740244116577j), (-0.0107526931263697+0.0397896274845627j), (0.0182892503609312-0.00653264433724856j), (-0.00710188853532244-0.0050445035279044j), (-2.20414002823034e-05+0.00373184532662288j)],
 [(-0.219707725574605+0.35810724986609j), (0.312038814492119+2.16840434497101e-19j), (-0.109433401402399-0.174379997015402j), (-0.0503362231078033+0.108510948023091j), (0.0631826956936223-0.00992931123813742j), (-0.0219902325360141-0.0233215237172002j), (-0.00314837555001163+0.0148621558916679j), (0.00630295247506065-0.00266790359447072j), (-0.00249343102520442-0.00156160619280611j)],
 [(-0.121305654177909-0.244558388829046j), (-0.109433401402399+0.174379997015402j), (0.136649392858215-1.76182853028894e-19j), (-0.0434623984527311-0.0669251299161109j), (-0.0168737559719828+0.0393768358149159j), (0.0211288536117387-0.00417146769324491j), (-0.00734306979471257-0.00712443264825166j), (-0.000742681625102133+0.00455752452374196j), (0.00179068247786595-0.000862706240042082j)],
 [(0.155552026648172+0.0180264818714123j), (-0.0503362231078033-0.108510948023091j), (-0.0434623984527311+0.0669251299161109j), (0.0467980890488569+5.14996031930615e-19j), (-0.0140208255975664-0.0209483313237692j), (-0.00472995448413803+0.0117916398375124j), (0.00589653974090387-0.00134198920550751j), (-0.00202109265416585-0.00184021636458858j), (-0.000150793859056431+0.00116822322464066j)],
 [(-0.0537690384136066+0.0630740244116577j), (0.0631826956936223+0.00992931123813742j), (-0.0168737559719828-0.0393768358149159j), (-0.0140208255975664+0.0209483313237692j), (0.0136137125669776-2.03287907341032e-20j), (-0.00387854073283377-0.0056769786724813j), (-0.0011741038702424+0.00306007798625676j), (0.00144000687517355-0.000355251914809693j), (-0.000481433965262789-0.00042129815655098j)],
 [(-0.0107526931263697-0.0397896274845627j), (-0.0219902325360141+0.0233215237172002j), (0.0211288536117387+0.00417146769324491j), (-0.00472995448413803-0.0117916398375124j), (-0.00387854073283377+0.0056769786724813j), (0.00347771689075251+8.21621958836671e-20j), (-0.000944046302699304-0.00136521328407881j), (-0.00026318475762475+0.000704212317211994j), (0.00031422288569727-8.10033316327328e-05j)],
 [(0.0182892503609312+0.00653264433724856j), (-0.00314837555001163-0.0148621558916679j), (-0.00734306979471257+0.00712443264825166j), (0.00589653974090387+0.00134198920550751j), (-0.0011741038702424-0.00306007798625676j), (-0.000944046302699304+0.00136521328407881j), (0.000792908166233942-7.41153828847513e-21j), (-0.00020531962049495-0.000294952695922854j), (-5.36226164765808e-05+0.000145645628243286j)],
 [(-0.00710188853532244+0.00504450352790439j), (0.00630295247506065+0.00266790359447072j), (-0.000742681625102133-0.00455752452374196j), (-0.00202109265416585+0.00184021636458858j), (0.00144000687517355+0.000355251914809693j), (-0.00026318475762475-0.000704212317211994j), (-0.00020531962049495+0.000294952695922854j), (0.000162971629601464-5.39321759384574e-22j), (-4.03304806590714e-05-5.77159110863666e-05j)],
 [(-2.20414002823034e-05-0.00373184532662288j), (-0.00249343102520442+0.00156160619280611j), (0.00179068247786595+0.000862706240042082j), (-0.000150793859056431-0.00116822322464066j), (-0.000481433965262789+0.00042129815655098j), (0.00031422288569727+8.10033316327328e-05j), (-5.36226164765808e-05-0.000145645628243286j), (-4.03304806590714e-05+5.77159110863666e-05j), (3.04302590501313e-05-4.10281583826302e-22j)]]

I can think of two explanations; in both cases I accuse user error. In chronological order:

Theory #1 (the subtle one)

My suspicion is that you're copying the printed values of the input matrix from one code as input into the other. I.e. you're throwing away double precision when you switch codes, and the loss gets amplified during the inverse-square-root calculation.
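That loss is easy to demonstrate. A minimal sketch, where rounding to 15 significant digits is my assumption of what a typical printout preserves:

import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
M = rng.random((9, 9)) + 1j * rng.random((9, 9))

# simulate copying a 15-significant-digit printout between codes
trunc = np.vectorize(lambda x: float('%.15g' % x))
M_t = trunc(M.real) + 1j * trunc(M.imag)

err_in = np.abs(M - M_t).max()
err_out = np.abs(fractional_matrix_power(M, -0.5)
                 - fractional_matrix_power(M_t, -0.5)).max()
print(err_in, err_out)  # err_out is typically noticeably larger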

As proof, I compared MATLAB's inverse square root with the very function you're using in Python. I will show a 3x3 example due to size considerations, but—spoiler warning—I did the same with a 9x9 random matrix and got two results with condition numbers of 11.245754109790719 (MATLAB) and 11.245754109790818 (numpy). That should tell you something about the similarity of the results without having to save and load the actual matrices between the two codes. I suggest you do this anyway, though: the keywords are scipy.io.loadmat and savemat.

What I did was generate the random data in Python (because that's what I prefer):

>>> import numpy as np
>>> print((np.random.rand(3,3) + 1j*np.random.rand(3,3)).tolist())
[[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]

By copying the same truncated output into both codes, I guarantee the correspondence of the inputs.

Example in MATLAB:

>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)]; [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)]; [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]];
>> A = M^(-0.5);
>> format long
>> disp(A)
  0.922112307438377 + 0.919346397931976i  0.108620882045523 - 0.649850434897895i -0.778737740194425 - 0.320654127149988i
 -0.423384022626231 - 0.842737730824859i  0.592015668030645 + 0.661682656423866i  0.529361991464903 - 0.388343838121371i
 -0.550789874427422 + 0.021129515921025i  0.472026152514446 - 0.502143106675176i  0.942976466768961 + 0.141839849623673i

>> cond(A)

ans =

   3.429368520364765

Example in Python:

>>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]

>>> from scipy.linalg import fractional_matrix_power
>>> A = fractional_matrix_power(M, -0.5)

>>> print(A)
[[ 0.92211231+0.9193464j   0.10862088-0.64985043j -0.77873774-0.32065413j]
 [-0.42338402-0.84273773j  0.59201567+0.66168266j  0.52936199-0.38834384j]
 [-0.55078987+0.02112952j  0.47202615-0.50214311j  0.94297647+0.14183985j]]

>>> np.linalg.cond(A)
3.4293685203647408

My suspicion is that if you scipy.io.loadmat the matrix into Python, do the calculation, scipy.io.savemat the result, and load it back into MATLAB, you'll see less than 1e-12 absolute error (hopefully even less) between the results.
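Concretely, that round trip could look something like this (file and variable names are placeholders):

# in MATLAB first:  save('input.mat', 'M')
from scipy.io import loadmat, savemat
from scipy.linalg import fractional_matrix_power

M = loadmat('input.mat')['M']
A = fractional_matrix_power(M, -0.5)
savemat('result_py.mat', {'A': A})
# back in MATLAB:
#   load('result_py.mat'); B = M^(-0.5); max(abs(A(:) - B(:)))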


Theory #2 (the facepalm one)

My suspicion is that you're using Python 2, where the division in your -1/2 exponent is integer division, so the power you're actually taking is a plain -1:

>>> # python 3 below
>>> # python 3's // is python 2's /, i.e. integer division
>>> 1/2
0.5
>>> 1//2
0
>>> -1/2
-0.5
>>> -1//2
-1

So if you're using Python 2, then calling

fractional_matrix_power(M,-1/2)

is actually the inverse of M. The obvious solution is to switch to Python 3. The less obvious solution is to keep using Python 2 (which you shouldn't, as the above exemplifies) but to use

from __future__ import division

at the top of each of your source files. This will override the behaviour of the plain / division operator so that it matches the Python 3 semantics, and you will have one less headache.
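Or sidestep the issue entirely and write the exponent as a float literal, which means the same thing under both interpreters:

fractional_matrix_power(M, -0.5)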
