
Keras: calculating derivatives of model output wrt input returns [None]

I need help calculating the derivatives of the model output with respect to its inputs in Keras.

I want to add a regularization term to the loss function. The regularizer contains the derivative of the classifier function, so I tried to take the derivative of the model output. The model is an MLP with one hidden layer, and the dataset is MNIST. When I compile the model and take the derivative, I get [None] as the result instead of the derivative function.

I have seen a similar post, but it didn't get an answer either: Taking derivative of Keras model wrt to inputs is returning all zeros

Here is my code. Please help me solve the problem.

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K

num_hiddenNodes = 1024
num_classes = 10

(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28 * 28)
X_train = X_train.astype('float32')
X_train /= 255
y_train = keras.utils.to_categorical(y_train, num_classes)

model = Sequential()
model.add(Dense(num_hiddenNodes, activation='softplus', input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
logits = model.output
# logits = model.layers[-1].output
print(logits)
X = K.identity(X_train)
# X = tf.placeholder(dtype=tf.float32, shape=(None, 784))
print(X)
print(K.gradients(logits, X))

Here is the output of the code. The two printed objects are Tensors; the gradients function returns [None].

Tensor("dense_2/Softmax:0", shape=(?, 10), dtype=float32)
Tensor("Identity:0", shape=(60000, 784), dtype=float32)
[None]

You are computing the gradients with respect to X_train, which is not an input variable of the computation graph. Instead you need to use the model's symbolic input tensor, so try something like:

grads = K.gradients(model.output, model.input)
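
To actually evaluate these symbolic gradients on data, you can wrap them in a backend function. A minimal sketch, assuming the TensorFlow 1.x backend and the model and X_train defined in the question (get_grads and sample_batch are just illustrative names):

from keras import backend as K

# Symbolic gradient of the model output w.r.t. the model's input placeholder
grads = K.gradients(model.output, model.input)

# Compile a callable mapping a batch of inputs to the gradient values
get_grads = K.function([model.input], grads)

sample_batch = X_train[:32]                 # any (batch, 784) float32 array
grad_values = get_grads([sample_batch])[0]  # numpy array of shape (32, 784)
print(grad_values.shape)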

In order to calculate the gradients with respect to the weights, you first need to get hold of the trainable variables. Here is how you do it:

outputs = model.output
trainable_variables = model.trainable_weights

Now calculate the gradients as:

gradients = K.gradients(outputs, trainable_variables)

As a side note, the gradients are part of your computational graph, and evaluating them depends on your backend. If you are using tf, you may need to initialize a session and pass the gradients variable to it for evaluation.
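
For example, with the TensorFlow 1.x backend you can reuse the session Keras has already set up for the model; this is only a rough sketch of that evaluation step, and the API differs under TensorFlow 2.x:

from keras import backend as K

outputs = model.output
trainable_variables = model.trainable_weights
gradients = K.gradients(outputs, trainable_variables)

sess = K.get_session()  # session Keras already initialized for the model
grad_values = sess.run(gradients,
                       feed_dict={model.input: X_train[:32]})
for g in grad_values:
    print(g.shape)      # one array per trainable weight (kernels and biases)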
