Solve 3D least squares in numpy/scipy

For some integer K around 100, I have 2*K (n, n) arrays: X_1, ..., X_K and Y_1, ..., Y_K.

I would like to perform the K least squares fits simultaneously, i.e. find the n-by-n matrix A minimizing the sum of squares over k: \sum_k norm(Y_k - A.dot(X_k), ord='fro') ** 2 (A must not depend on k).

I am looking for an easy way to do this with numpy or scipy. I know the function I want to minimize is a quadratic form in A so I could do it by hand, but I'm looking for an off-the-shelf way of doing it. Is there one?

Something like this works if n is a small number.

import numpy as np
from scipy.optimize import minimize

K = 5
n = 10

X = np.random.random_sample((K, n, n))
Y = np.random.random_sample((K, n, n))

def opt(A_flat):

    A = np.reshape(A_flat, (n, n))

    # A @ X broadcasts over the leading K axis, so (A @ X)[k] == A.dot(X[k]).
    # With axis=(1, 2), norm(..., ord='fro') returns the Frobenius norm of
    # each (n, n) slice.
    return np.sum(np.linalg.norm(Y - A @ X, ord='fro', axis=(1, 2)) ** 2.0)

A_init = np.random.random_sample((n, n))
print(minimize(opt, A_init.ravel()))

Careful: the optimization algorithms minimize uses by default are local.

I can't help with the Python, but here is the mathematical solution, in case it helps. We seek to minimise

E = Sum{ Tr[ (Y[j]-A*X[j])*(Y[j]-A*X[j])' ] }

Some algebra yields

E = Tr(P-A*Q'-Q*A'+A*R*A')
where
P = Sum{ Y[j]*Y[j]'}
Q = Sum{ Y[j]*X[j]'}
R = Sum{ X[j]*X[j]'}

If R is invertible, a little more algebra yields

E = Tr( (A-Q*S)*R*(A-Q*S)' ) + Tr( P - Q*S*Q' )
where S = inv(R)

Since

(A-Q*S)*R*(A-Q*S)' is positive semi-definite,

its trace is non-negative, so we minimise E by taking A = Q*S.

In this case an algorithm would be:

compute Q
compute R
solve A*R = Q for A (e.g. by finding the Cholesky factors of R)

If R is not invertible, we should use the generalised inverse for S instead of the plain inverse.
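
A sketch of this recipe in numpy, reusing the X and Y arrays from the example above and assuming R is invertible (otherwise replace np.linalg.solve with np.linalg.lstsq or np.linalg.pinv):

import numpy as np

K, n = 5, 10
X = np.random.random_sample((K, n, n))
Y = np.random.random_sample((K, n, n))

# Q = Sum{ Y[j]*X[j]' },  R = Sum{ X[j]*X[j]' }
Q = np.einsum('kij,klj->il', Y, X)
R = np.einsum('kij,klj->il', X, X)

# Solve A*R = Q for A.  R is symmetric, so solving R @ A.T = Q.T and
# transposing gives A = Q @ inv(R) without forming the inverse explicitly.
A = np.linalg.solve(R, Q.T).T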

In fact the answer was simple: I just needed to create bigger matrices Y and X by horizontally stacking the Y_k (to create Y) and the X_k (to create X). Then I can just solve a regular 2-D least squares problem: minimize norm(Y - A.dot(X)).
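
A minimal sketch of this stacking approach, assuming the same (K, n, n) arrays X and Y as above; np.linalg.lstsq solves the transposed system X_stacked.T @ A.T = Y_stacked.T in the least-squares sense:

import numpy as np

K, n = 5, 10
X = np.random.random_sample((K, n, n))
Y = np.random.random_sample((K, n, n))

# Stack the K blocks side by side: both become (n, K*n).
X_stacked = np.hstack(X)
Y_stacked = np.hstack(Y)

# Fitting Y_stacked to A @ X_stacked is equivalent to the least-squares
# problem X_stacked.T @ A.T = Y_stacked.T, solved column by column.
A, *_ = np.linalg.lstsq(X_stacked.T, Y_stacked.T, rcond=None)
A = A.T

print(A.shape)  # (n, n)

Since the stacked problem has the same normal equations A*R = Q, this should agree with the closed-form solution from the previous answer up to numerical precision.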
