TensorFlow custom training step with different loss functions
According to the TensorFlow documentation, a custom training step can be performed like this:
# Fake sample data for testing
x_batch_train = tf.zeros([32, 3, 1], dtype="float32")
y_batch_train = tf.zeros([32], dtype="float32")
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
with tf.GradientTape() as tape:
    logits = model(x_batch_train, training=True)
    loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
But if I want to use a different loss function, such as categorical crossentropy, I would need to apply argmax to the logits created inside the gradient tape:
loss_fn = tf.keras.losses.get("categorical_crossentropy")
with tf.GradientTape() as tape:
    logits = model(x_batch_train, training=True)
    prediction = tf.cast(tf.argmax(logits, axis=-1), y_batch_train.dtype)
    loss_value = loss_fn(y_batch_train, prediction)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
The problem is that tf.argmax is not differentiable, so TensorFlow cannot compute the gradients and you get the error:
ValueError: No gradients provided for any variable: [...]
My question: how can I make the second example work without changing the loss function?
categorical_crossentropy expects your labels to be one-hot encoded, so you should make sure of that first. Then pass your model's output to the loss directly; that output should be one probability (or logit) per class. More info -> https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy#standalone_usage
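A minimal sketch of that approach, applied to the training step from the question. The dummy model, optimizer, and num_classes are placeholders only to keep the snippet self-contained; since the model here outputs raw logits (as in the first example), from_logits=True is used instead of adding a softmax layer:

import tensorflow as tf
from tensorflow import keras

num_classes = 4  # assumed; use your real number of classes
model = keras.Sequential([keras.layers.Flatten(), keras.layers.Dense(num_classes)])
optimizer = keras.optimizers.Adam()

# Fake sample data: integer class ids, not one-hot encoded yet
x_batch_train = tf.zeros([32, 3, 1], dtype="float32")
y_batch_train = tf.zeros([32], dtype="int32")

# categorical_crossentropy expects one-hot targets; from_logits=True because
# the model has no final softmax layer
loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)

with tf.GradientTape() as tape:
    logits = model(x_batch_train, training=True)               # shape (32, num_classes)
    y_one_hot = tf.one_hot(y_batch_train, depth=num_classes)   # shape (32, num_classes)
    # Pass the logits straight to the loss -- no argmax -- so gradients can flow
    loss_value = loss_fn(y_one_hot, logits)

grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))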