Learning a multivariate normal covariance matrix using PyTorch
I am trying to learn a multivariate normal covariance matrix (Sigma, Σ) from some observations.
The way I went at it is by using torch.distributions.MultivariateNormal:
import torch
from torch.distributions import MultivariateNormal

# I tried both the scale_tril parameter and the covariance_matrix parameter.
mvn = MultivariateNormal(loc=torch.tensor([0.0, 0.0], requires_grad=False).view(1, 2),
                         scale_tril=torch.tensor([[1.0, 0.0], [0.0, 1.0]],
                                                 requires_grad=True).view(-1, 2, 2))
loss = -mvn.log_prob(torch.ones((1, 2))).mean()
loss.backward()
print(mvn.loc.grad)
I get None. I tried fiddling with the dimensions of both the loc and the scale_tril parameters. Nothing appears to work. Any ideas?

Best, Eyal.
You are not calling .grad on your leaf nodes (you are calling it on the result of .view rather than on the tensor itself), and you also have requires_grad=False on the mean. Let's make things more explicit:
import torch
from torch.distributions import MultivariateNormal

mean = torch.tensor([0.0, 0.0], requires_grad=True)
cov = torch.tensor([[1.0, 0.0], [0.0, 1.0]], requires_grad=True)

mvn = MultivariateNormal(loc=mean.view(1, 2),
                         scale_tril=cov.view(-1, 2, 2))
loss = -mvn.log_prob(torch.ones((1, 2))).mean()
loss.backward()

print(mean.grad)
print(cov.grad)
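To actually learn the covariance from data, you can wrap this in an optimization loop. Below is a minimal sketch, not from the original post: the "true" distribution, sample size, optimizer, and learning rate are all illustrative assumptions. It fits the mean and covariance by minimizing the negative log-likelihood, parameterizing Sigma through its Cholesky factor so the learned covariance stays positive definite.

```python
import torch
from torch.distributions import MultivariateNormal

torch.manual_seed(0)

# Illustrative "ground truth" to generate observations from.
true_L = torch.tensor([[1.5, 0.0], [0.5, 0.8]])
true_mvn = MultivariateNormal(loc=torch.zeros(2), scale_tril=true_L)
obs = true_mvn.sample((2000,))

# Leaf tensors: call .grad / optimize on these, not on views of them.
mean = torch.zeros(2, requires_grad=True)
tril_raw = torch.eye(2, requires_grad=True)

opt = torch.optim.Adam([mean, tril_raw], lr=0.05)
for step in range(500):
    opt.zero_grad()
    # Keep only the lower triangle so scale_tril is a valid Cholesky factor.
    L = torch.tril(tril_raw)
    mvn = MultivariateNormal(loc=mean, scale_tril=L)
    loss = -mvn.log_prob(obs).mean()  # negative log-likelihood
    loss.backward()
    opt.step()

with torch.no_grad():
    L = torch.tril(tril_raw)
    sigma_hat = L @ L.T  # learned covariance; should be close to true_L @ true_L.T
    print(sigma_hat)
```

Note that the Cholesky parameterization is a design choice: optimizing a raw covariance matrix directly can wander off the positive-definite cone, while optimizing the lower-triangular factor keeps Sigma = L Lᵀ valid throughout training.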