
How to compute the Hessian of the loss w.r.t. the parameters in PyTorch using autograd.grad

I know there is a lot out there about "computing the Hessian" in PyTorch, but as far as I can tell nothing I've found works for me. So, to be as precise as possible: the Hessian I want is the Jacobian of the gradient of the loss with respect to the network parameters, i.e. the matrix of second derivatives with respect to the parameters.

I found some code that works in an intuitive, if not very fast, way. It plainly computes the gradient of the gradient of the loss w.r.t. the params, w.r.t. the params, one element (of the gradient) at a time. I think the logic is definitely right, but I am getting an error related to requires_grad. I'm a PyTorch beginner, so this is probably a simple thing, but the error seems to say that it cannot take the gradient of the env_grads variable, which is the output of the previous grad call.

Any help on this would be greatly appreciated. Here is the code followed by the error message. I also print the env_grads[0] variable so we can see that it is in fact a tensor, which is the correct output of the previous grad call.

env_loss = loss_fn(env_outputs, env_targets)
total_loss += env_loss
env_grads = torch.autograd.grad(env_loss, params, retain_graph=True)

print( env_grads[0] )
hess_params = torch.zeros_like(env_grads[0])
for i in range(env_grads[0].size(0)):
    for j in range(env_grads[0].size(1)):
        hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j] #  <--- error here
print( hess_params )
exit()

Output:

tensor([[-6.4064e-03, -3.1738e-03,  1.7128e-02,  8.0391e-03],
        [ 7.1698e-03, -2.4640e-03, -2.2769e-03, -1.0687e-03],
        [-3.0390e-04, -2.4273e-03, -4.0799e-02, -1.9149e-02],
        ...,
        [ 1.1258e-02, -2.5911e-05, -9.8133e-02, -4.6059e-02],
        [ 8.1502e-04, -2.5814e-03,  4.1772e-02,  1.9606e-02],
        [-1.0075e-02,  6.6072e-03,  8.3118e-04,  3.9011e-04]], device='cuda:0')

Error:

Traceback (most recent call last):
  File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 258, in <module>
    main(args)
  File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 245, in main
    deep_mask=args.deep_mask
  File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 103, in train
    scale_grad_inverse_sparsity=scale_grad_inverse_sparsity
  File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/and_mask_utils.py", line 154, in get_grads_deep
    hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j]
  File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 157, in grad
    inputs, allow_unused)
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

PyTorch recently added a functional higher-level API to torch.autograd which provides torch.autograd.functional.hessian(func, inputs, ...) to directly evaluate the Hessian of a scalar function func at a location specified by inputs, a tuple of tensors corresponding to the arguments of func. I believe hessian itself does not support automatic differentiation.

Note, however, that as of March 2021 it is still in beta.
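A minimal sketch of that API on a toy scalar function (the function f and its input x below are illustrative, not taken from the question or this answer):

import torch

def f(x):
    return (x ** 2).sum()   # simple scalar function of a vector

x = torch.randn(3)
H = torch.autograd.functional.hessian(f, x)   # Hessian of f evaluated at x
print(H)   # equals 2 * torch.eye(3), since d^2/dx_i dx_j sum(x_k^2) = 2 if i == j else 0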


Full example using torch.autograd.functional.hessian to create a score test for a non-zero mean (as a (bad) alternative to the one-sample t-test):

import numpy as np
import torch, torchvision
from torch.autograd import Variable, grad
import torch.distributions as td
import math
from torch.optim import Adam
import scipy.stats


x_data = torch.randn(100)+0.0 # observed data (here sampled under H0)

N = x_data.shape[0] # number of observations

mu_null = torch.zeros(1)
sigma_null_hat = Variable(torch.ones(1), requires_grad=True)

def log_lik(mu, sigma):
  return td.Normal(loc=mu, scale=sigma).log_prob(x_data).sum()

# Find theta_null_hat by some gradient descent algorithm (in this case a closed-form expression would be trivial to obtain (see below)):
opt = Adam([sigma_null_hat], lr=0.01)
for epoch in range(2000):
    opt.zero_grad() # reset the optimizer's gradient accumulator
    loss = - log_lik(mu_null, sigma_null_hat) # compute log likelihood with current value of sigma_null_hat  (= Forward pass)
    loss.backward() # compute gradients (= Backward pass)
    opt.step()      # update sigma_null_hat
    
print(f'parameter fitted under null: sigma: {sigma_null_hat}, expected: {torch.sqrt((x_data**2).mean())}')

theta_null_hat = (mu_null, sigma_null_hat)

U = torch.tensor(torch.autograd.functional.jacobian(log_lik, theta_null_hat)) # Jacobian (= vector of partial derivatives of log likelihood w.r.t. the parameters (of the full/alternative model)) = score
I = -torch.tensor(torch.autograd.functional.hessian(log_lik, theta_null_hat)) / N # estimate of the Fisher information matrix
S = torch.t(U) @ torch.inverse(I) @ U / N # test statistic, often named "LM" (as in Lagrange multiplier), would be zero at the maximum likelihood estimate

pval_score_test = 1 - scipy.stats.chi2(df = 1).cdf(S) # S asymptotically follows a chi^2 distribution with degrees of freedom equal to the number of parameters fixed under H0
print(f'p-value Chi^2-based score test: {pval_score_test}')
#parameter fitted under null: sigma: tensor([0.9260], requires_grad=True), expected: 0.9259940385818481

# comparison with Student's t-test:
pval_t_test = scipy.stats.ttest_1samp(x_data, popmean = 0).pvalue
#> p-value Chi^2-based score test: 0.9203232752568568
print(f'p-value Student\'s t-test: {pval_t_test}')
#> p-value Student's t-test: 0.9209265268946605
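To connect this back to the original question (the Hessian of a network loss w.r.t. the parameters), one option is to write the loss explicitly as a function of the parameter tensor and pass that function to hessian. A hedged sketch with made-up data and a single weight tensor (none of these names come from the question):

import torch

x = torch.randn(100, 4)   # made-up inputs
y = torch.randn(100)      # made-up targets

def loss_wrt_weights(w):
    # MSE of a linear model, written directly as a function of the weight tensor
    return ((x @ w - y) ** 2).mean()

w0 = torch.zeros(4)
H = torch.autograd.functional.hessian(loss_wrt_weights, w0)
print(H.shape)   # torch.Size([4, 4]): all second derivatives of the loss w.r.t. w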

Don't mind me answering my own question here. I found the hint that I needed in this thread on the PyTorch site, about halfway down:

That won't work, in line with the 3rd point I mentioned. You need to create the computational graph with differentiable operations, which will create a result tensor with a valid grad_fn.

I noticed that autograd.grad has an argument called create_graph, so I set this to True in the first call to grad, and that solved the error.

Modified working code:

env_loss = loss_fn(env_outputs, env_targets)
total_loss += env_loss
env_grads = torch.autograd.grad(env_loss, params, retain_graph=True, create_graph=True)

print( env_grads[0] )
hess_params = torch.zeros_like(env_grads[0])
for i in range(env_grads[0].size(0)):
    for j in range(env_grads[0].size(1)):
        hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j] #  <--- no longer errors
print( hess_params )
exit()
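For reference, a self-contained toy reproduction of the same double-grad pattern (made-up tensors, not the asker's model), showing why create_graph=True in the first call is what makes the second call possible:

import torch

w = torch.randn(3, requires_grad=True)
loss = (w ** 3).sum()

# create_graph=True keeps the first-order gradient attached to the graph so it can be differentiated again
(g,) = torch.autograd.grad(loss, w, create_graph=True)   # g = 3 * w**2

hess_diag = torch.zeros(3)
for i in range(3):
    # second derivative of the loss w.r.t. w[i]; retain_graph=True because the graph is reused each iteration
    hess_diag[i] = torch.autograd.grad(g[i], w, retain_graph=True)[0][i]

print(hess_diag)           # should match 6 * w
print((6 * w).detach())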
