tensorflow 2.0: tf.GradientTape().gradient() returns None
I designed my own loss function for my graduate research; it computes the distance between the histogram of per-sample losses and a normal distribution. I am implementing this loss function in the setting of the TensorFlow 2.0 tutorial on Iris flower classification.
I checked my loss value and its type; they are the same as in the tutorial, but the grads returned by my tape.gradient() are None.
This is done in Google Colab with:
TensorFlow version: 2.0.0-beta1
Eager execution: True
My code block for loss and gradient:
def loss(model, x, y):
    y_ = model(x)  # y_.shape is (batch_size, 3)
    losses = []
    for i in range(y.shape[0]):
        loss = loss_object(y_true=y[i], y_pred=y_[i])
        losses.append(float(loss))
    dis = get_distance_between_samples_and_distribution(losses, if_plot=0)
    return tf.convert_to_tensor(dis, dtype=np.float32)

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
        tape.watch(model.trainable_variables)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
loss_value, grads = grad(model, features, labels)
print("loss_value:",loss_value)
print("type(loss_value):", type(loss_value))
print("grads:", grads)
################################################# Output:
loss_value: tf.Tensor(0.21066944, shape=(), dtype=float32)
type(loss_value): <class 'tensorflow.python.framework.ops.EagerTensor'>
grads: [None, None, None, None, None, None]
The code in the tutorial is:
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def loss(model, x, y):
    y_ = model(x)
    return loss_object(y_true=y, y_pred=y_)

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
        tape.watch(model.trainable_variables)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)

loss_value, grads = grad(model, features, labels)
print("loss_value:",loss_value)
print("type(loss_value):", type(loss_value))
print("grads:", grads)
################################################# Output:
loss_value: tf.Tensor(0.56536925, shape=(), dtype=float32)
type(loss_value): <class 'tensorflow.python.framework.ops.EagerTensor'>
grads: [<tf.Tensor: id=9962, shape=(4, 10), dtype=float32, numpy=
array([[ 0.0000000e+00, 6.5984917e-01, 3.0700830e-01, -7.5234145e-01,
......
I feel the calculation inside my self-defined loss should not matter, since the data type and shape of the result are the same, but in case it does, here is my loss function:
def get_distance_between_samples_and_distribution(errors, if_plot=1, n_bins=5):
    def get_middle(x):
        # midpoint of each consecutive pair of values
        xMid = np.zeros(x.shape[0] // 2)
        for i in range(xMid.shape[0]):
            xMid[i] = 0.5 * (x[2 * i] + x[2 * i + 1])
        return xMid

    # normalized histogram of the per-sample losses
    bins, edges = np.histogram(errors, n_bins, normed=1)
    left, right = edges[:-1], edges[1:]
    X = np.array([left, right]).T.flatten()
    Y = np.array([bins, bins]).T.flatten()
    X_middle = get_middle(X)
    Y_middle = get_middle(Y)

    # mean squared distance between each bin height and the standard normal pdf
    distance = []
    for i in range(X_middle.shape[0]):
        dis = np.abs(scipy.stats.norm.pdf(X_middle[i]) - Y_middle[i])
        distance.append(dis)
    distance2 = np.power(distance, 2)
    return sum(distance2) / len(distance2)
I searched and tried adding tape.watch() and checking the indentation of my return, but neither fixed the None problem. I would appreciate any suggestions for fixing this. Thanks!
The definition of the tf.GradientTape class is here: tf.GradientTape
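For reference, trainable tf.Variables are watched by the tape automatically; tape.watch() is only needed for plain tensors, and it must be called inside the with block before the tensor is used. A minimal sketch of the standard usage:

import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)       # needed only because x is a constant, not a tf.Variable
    y = x * x           # recorded while inside the tape context
dy_dx = tape.gradient(y, x)  # tf.Tensor(6.0)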
The reason was that my loss function was not differentiable: converting to Python floats and using NumPy/SciPy operations leaves TensorFlow's graph, so the tape cannot trace the computation back to the model's variables. I switched to a different measure of the similarity between the two distributions, and now it works.
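For example, one differentiable alternative is to keep every step in TensorFlow ops and compare the first two moments of the per-sample losses against a standard normal. A minimal sketch (illustrative only; moment matching is an assumed stand-in for the actual similarity measure):

import tensorflow as tf

# reduction=NONE keeps one loss value per sample instead of a scalar mean
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def loss(model, x, y):
    y_ = model(x)                              # shape (batch_size, 3)
    losses = loss_object(y_true=y, y_pred=y_)  # shape (batch_size,)
    # Assumed similarity measure: distance between the empirical moments of
    # the losses and those of a standard normal N(0, 1). Every op here is a
    # tf op, so tape.gradient() can differentiate through the whole loss.
    mean = tf.reduce_mean(losses)
    std = tf.math.reduce_std(losses)
    return tf.square(mean) + tf.square(std - 1.0)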