
tensorflow 2.0: tf.GradientTape().gradient() returns None

For my graduate research I designed my own loss function, which computes the distance between a histogram of the losses and a normal distribution. I am implementing this loss function in the setting of the TensorFlow 2.0 tutorial on iris flower classification.

I checked my loss value and its type, and they are the same as in the tutorial, but the grads I get back from tape.gradient() are None.

This is done in Google Colab with:

TensorFlow version: 2.0.0-beta1

Eager execution: True

My loss and gradient code block:

def loss(model, x, y):
  y_ = model(x) # y_.shape is (batch_size, 3)
  losses = []
  for i in range(y.shape[0]):
    loss = loss_object(y_true=y[i], y_pred=y_[i])
    losses.append(float(loss))
  dis = get_distance_between_samples_and_distribution(losses, if_plot = 0)
  return tf.convert_to_tensor(dis, dtype=np.float32)

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
    tape.watch(model.trainable_variables)
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

loss_value, grads = grad(model, features, labels)
print("loss_value:",loss_value)
print("type(loss_value):", type(loss_value))
print("grads:", grads)
################################################# Output:
loss_value: tf.Tensor(0.21066944, shape=(), dtype=float32)
type(loss_value): <class 'tensorflow.python.framework.ops.EagerTensor'>
grads: [None, None, None, None, None, None]

The code from the tutorial is:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def loss(model, x, y):
  y_ = model(x)
  return loss_object(y_true=y, y_pred=y_)

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
    tape.watch(model.trainable_variables)
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

loss_value, grads = grad(model, features, labels)
print("loss_value:",loss_value)
print("type(loss_value):", type(loss_value))
print("grads:", grads)
################################################# Output:
loss_value: tf.Tensor(0.56536925, shape=(), dtype=float32)
type(loss_value): <class 'tensorflow.python.framework.ops.EagerTensor'>
grads: [<tf.Tensor: id=9962, shape=(4, 10), dtype=float32, numpy=
array([[ 0.0000000e+00,  6.5984917e-01,  3.0700830e-01, -7.5234145e-01,
      ......

I would think that how the custom loss is computed shouldn't matter, since the data type and shape are the same as in the tutorial, but in case it does, here is my loss function:

import numpy as np
import scipy.stats

def get_distance_between_samples_and_distribution(errors, if_plot = 1, n_bins = 5):
  def get_middle(x):
    xMid = np.zeros(x.shape[0]//2)
    for i in range(xMid.shape[0]):
      xMid[i] = 0.5*(x[2*i]+x[2*i+1])
    return xMid

  bins, edges = np.histogram(errors, n_bins, density=True)
  left,right = edges[:-1],edges[1:]
  X = np.array([left,right]).T.flatten()
  Y = np.array([bins,bins]).T.flatten()
  X_middle = get_middle(X)
  Y_middle = get_middle(Y)
  distance = []
  for i in range(X_middle.shape[0]):
    dis = np.abs(scipy.stats.norm.pdf(X_middle[i])- Y_middle[i])
    distance.append(dis)
  distance2 = np.power(distance, 2)

  return sum(distance2)/len(distance2)

I searched around and tried adding tape.watch(), and checked the indentation of the return, but neither fixed the None issue. Any suggestions for solving this would be greatly appreciated. Thanks!

The definition of the tf.GradientTape class is here.
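
For reference, a minimal sketch of the usual GradientTape pattern (not from the original post). Trainable tf.Variables are watched automatically; tape.watch() is only needed for plain tensors, and it must be called before the operations you want recorded:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    # Trainable variables are watched automatically, so no tape.watch()
    # is needed here; every TF op on x is recorded by the tape.
    y = x * x
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)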

The reason was that my loss function is not differentiable: converting the per-sample losses to float and running them through numpy/scipy disconnects them from the tape. I used a different measure of the similarity between the two distributions, and it works now.
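
For illustration, here is a minimal sketch of what a differentiable version could look like. The answer does not say which similarity measure was actually used; this example assumes a simple moment-matching penalty against a standard normal, built entirely from TF ops so the tape can trace it back to the model weights:

import tensorflow as tf

def loss(model, x, y):
    y_ = model(x)
    # Per-sample cross-entropy, kept as a tensor (no float()/numpy
    # conversion) so it stays connected to the tape.
    per_sample = tf.keras.losses.sparse_categorical_crossentropy(
        y, y_, from_logits=True)
    # Hypothetical similarity measure: match the first two moments of
    # the per-sample losses to those of a standard normal (mean 0,
    # variance 1).
    mean = tf.reduce_mean(per_sample)
    var = tf.math.reduce_variance(per_sample)
    return tf.square(mean) + tf.square(var - 1.0)

Because every operation here is a TensorFlow op, tape.gradient() returns actual gradients instead of None.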

