
How do I mutate the input using gradient descent in PyTorch?

I'm new to PyTorch. I learned that it uses autograd to automatically calculate the gradients needed for gradient descent.

Instead of adjusting the weights, I would like to mutate the input to achieve a desired output, using gradient descent. So, instead of the weights of the neurons changing, I want to keep all of the weights the same and just change the input to minimize the loss.

For example: the network is a trained image classifier for the digits 0-9. I input random noise, and I want to morph it so that the network considers it a 3 with 60% confidence. That is, I would like to use gradient descent to adjust the values of the input (originally noise) until the network classifies the input as a 3 with 60% confidence.

Is there a way to do this?

I assume you know how to do regular training with gradient descent. You only need to change which parameters the optimizer updates. Something like:

# ... Set up your network and load the input
# ...

# Set requires_grad properly -> we train the input, not the parameters
input.requires_grad = True
for p in net.parameters():
    p.requires_grad = False

# Set up the optimizer over the input instead of the weights
# Previously this would have been SomeOptimizer(net.parameters())
optim = SomeOptimizer([input])

output_that_you_want = ...
actual_output = net(input)
# PyTorch loss functions take (prediction, target) in that order
some_loss = SomeLossFunction(actual_output, output_that_you_want)
# ...
# Back-prop and optim.step() as usual
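
To make the question's scenario concrete, here is a minimal runnable sketch: starting from random noise, the input is optimized until the frozen classifier assigns class 3 a probability of about 0.6. The tiny linear net, the Adam optimizer, the learning rate, and the squared-gap loss on the probability are all illustrative assumptions, not part of the original answer; in practice you would load your own trained digit classifier.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier (assumption) -- load your trained 0-9 model here instead
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
net.eval()
for p in net.parameters():
    p.requires_grad = False  # freeze the weights

# Start from random noise and make the input itself trainable
x = torch.randn(1, 1, 28, 28, requires_grad=True)
optim = torch.optim.Adam([x], lr=0.05)

target_class, target_confidence = 3, 0.60
for step in range(500):
    optim.zero_grad()
    probs = F.softmax(net(x), dim=1)
    # Drive the probability of class 3 toward 0.6
    loss = (probs[0, target_class] - target_confidence) ** 2
    loss.backward()
    optim.step()
    if abs(probs[0, target_class].item() - target_confidence) < 0.01:
        break  # close enough to 60% confidence

Any differentiable loss that expresses "class 3 at 60%" works here; note that cross-entropy against class 3 would push the confidence toward 100% rather than 60%, which is why this sketch penalizes the squared gap to 0.6 instead.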
