
Python: Gradient of matrix function

I want to calculate the gradient of the following function: h(x) = 0.5 * x^T * A * x + b^T * x.

For now I set A to be just a (2,2) matrix.

import numpy as np

def function(x):
    # h(x) = 0.5 * x^T A x + b^T x
    return 0.5 * np.dot(np.dot(np.transpose(x), A), x) + np.dot(np.transpose(b), x)

where

A = np.zeros((2, 2))
n = A.shape[0]
A[range(n), range(n)] = 1   # set the main diagonal to 1

a (2,2) matrix with ones on the main diagonal, and

b = np.ones(2) 

For a given point x = (1,1), numpy.gradient returns an empty list.

x = np.ones(2)  
result = np.gradient(function(x))

However, shouldn't I get something like this: grad h((1,1)) = (x1 + 1, x2 + 1) = (2, 2)?
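For reference, the analytic gradient is grad h(x) = A * x + b (for symmetric A), which can be checked directly with plain NumPy; a minimal sketch reusing the A, b and x defined above:

import numpy as np

A = np.eye(2)                      # same A as above: ones on the main diagonal
b = np.ones(2)
x = np.ones(2)

expected_grad = np.dot(A, x) + b   # analytic gradient of 0.5 * x^T A x + b^T x (A symmetric)
print(expected_grad)               # [2. 2.]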

Appreciate any help.

It seems like you want to perform symbolic differentiation or automatic differentiation, which np.gradient does not do. sympy is a package for symbolic math, and autograd is a package for automatic differentiation for numpy. For example, to do this with autograd:

import autograd.numpy as np
from autograd import grad

def function(x):
    return 0.5 * np.dot(np.dot(np.transpose(x), A), x) + np.dot(np.transpose(b), x)

A = np.zeros((2, 2))
n = A.shape[0]
A[range(n), range(n)] = 1   # set the main diagonal to 1
b = np.ones(2)
x = np.ones(2)
grad(function)(x)

Outputs:

array([2., 2.])
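Alternatively, here is a minimal sketch of the same gradient with sympy (symbolic differentiation; the symbol names x1 and x2 are just illustrative):

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
A = sp.eye(2)                      # same A as above
b = sp.Matrix([1, 1])              # same b as above

h = (sp.Rational(1, 2) * x.T * A * x + b.T * x)[0, 0]
grad_h = [sp.diff(h, v) for v in (x1, x2)]        # [x1 + 1, x2 + 1]
print([g.subs({x1: 1, x2: 1}) for g in grad_h])   # [2, 2]

For completeness, np.gradient computes numerical finite differences over an array of sampled values, which is why it does not produce the analytic gradient of a Python function here.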
