Use additional *trainable* variables in Keras/Tensorflow custom loss function
I know how to write a custom loss function in Keras that accepts additional *inputs*, beyond the standard y_true, y_pred pair, see below. My issue is feeding the loss function with trainable variables (a few of them) that are part of the loss gradient and should therefore be updated.
My workaround is:

- Feed the network a dummy input of size N x V, where N is the number of observations and V the number of additional variables
- Add a Dense() layer dummy_output so that Keras tracks my V "weights"
- Use this layer's V weights in my custom loss function for my true output layer
- Use a dummy loss function for the dummy_output layer (one that simply returns 0.0 and/or has weight 0.0), so that my V "weights" are only updated through my custom loss function

My question is: is there a more natural, Keras/TF-like way of doing this? It feels so contrived, not to mention prone to bugs.
Example of my workaround:
(Yes, I know this is a very silly custom loss function; in reality things are much more complex.)
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input
from tensorflow.keras import Model
n_col = 10
n_row = 1000
X = np.random.normal(size=(n_row, n_col))
beta = np.arange(10)
y = X @ beta
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# my custom loss function accepting my dummy layer with 2 variables
def custom_loss_builder(dummy_layer):
    def custom_loss(y_true, y_pred):
        var1 = dummy_layer.trainable_weights[0][0]
        var2 = dummy_layer.trainable_weights[0][1]
        return var1 * K.mean(K.square(y_true - y_pred)) + var2 ** 2  # so var2 should get to zero, var1 should get to minus infinity?
    return custom_loss
# my dummy loss function
def dummy_loss(y_true, y_pred):
    return 0.0
# my dummy input, N X V, where V is 2 for 2 vars
dummy_x_train = np.random.normal(size=(X_train.shape[0], 2))
# model
inputs = Input(shape=(X_train.shape[1],))
dummy_input = Input(shape=(dummy_x_train.shape[1],))
hidden1 = Dense(10)(inputs) # here only 1 hidden layer in the "real" network, assume whatever network is built here
output = Dense(1)(hidden1)
dummy_output = Dense(1, use_bias=False)(dummy_input)
model = Model(inputs=[inputs, dummy_input], outputs=[output, dummy_output])
# compilation, notice the zero loss weight for the dummy_output layer
model.compile(
    loss=[custom_loss_builder(model.layers[-1]), dummy_loss],
    loss_weights=[1.0, 0.0], optimizer='adam')
# run, notice y_train repeated for the dummy_output layer; it will not be used (could have created a dummy_y_train as well)
history = model.fit([X_train, dummy_x_train], [y_train, y_train],
                    batch_size=32, epochs=100, validation_split=0.1, verbose=0,
                    callbacks=[EarlyStopping(monitor='val_loss', patience=5)])
Whatever the initialization of var1 and var2 (that is, of the dummy_output layer), they aspire to minus inf and 0 respectively, and this does seem to work (the gradient sketch after the plot code below shows why):
(This plot was made by running the model iteratively and saving those two weights, as shown below.)
var1_list = []
var2_list = []
for i in range(100):
    if i % 10 == 0:
        print('step %d' % i)
    model.fit([X_train, dummy_x_train], [y_train, y_train],
              batch_size=32, epochs=1, validation_split=0.1, verbose=0)
    var1, var2 = model.layers[-1].get_weights()[0]
    var1_list.append(var1.item())
    var2_list.append(var2.item())

plt.plot(var1_list, label='var1')
plt.plot(var2_list, 'r', label='var2')
plt.legend()
plt.show()
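To see why this happens (a minimal sketch I am adding, not part of the original post): the gradient of the loss with respect to var1 is the MSE itself, which is never negative, so gradient descent pushes var1 down without bound, while the gradient with respect to var2 is 2 * var2, which vanishes at zero. This can be checked directly with tf.GradientTape:

import tensorflow as tf

var1 = tf.Variable(0.5)
var2 = tf.Variable(0.5)
mse = tf.constant(2.0)  # stand-in for K.mean(K.square(y_true - y_pred))
with tf.GradientTape() as tape:
    loss = var1 * mse + var2 ** 2
g1, g2 = tape.gradient(loss, [var1, var2])
print(g1.numpy(), g2.numpy())  # 2.0 (= mse, always >= 0) and 1.0 (= 2 * var2)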
Answering my own question here: after days of struggling I got it to work without a dummy input, which I think is much better and should be the "canonical" way until Keras/TF simplify the process. It is also how the Keras/TF docs do it.
The key to using a loss function with external trainable variables is a custom loss/output layer that calls self.add_loss(...) inside its call() implementation, like so:
from tensorflow.keras.layers import Layer

class MyLoss(Layer):
    def __init__(self, var1, var2):
        super(MyLoss, self).__init__()
        self.var1 = K.variable(var1)  # or tf.Variable(var1) etc.
        self.var2 = K.variable(var2)

    def get_vars(self):
        return self.var1, self.var2

    def custom_loss(self, y_true, y_pred):
        return self.var1 * K.mean(K.square(y_true - y_pred)) + self.var2 ** 2

    def call(self, y_true, y_pred):
        self.add_loss(self.custom_loss(y_true, y_pred))
        return y_pred
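A quick sanity check, which I am adding here and which is not part of the original post: because the variables are created in __init__ and stored as attributes, Keras should track them as trainable weights of the layer, which is what lets the optimizer update them through the add_loss term (assuming TF 2.x):

loss_layer = MyLoss(0.5, 0.5)
print(loss_layer.trainable_weights)  # expected: two scalar tf.Variables, both 0.5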
Notice that the MyLoss layer now takes two inputs, the actual y_true and the predicted y up until that point:
inputs = Input(shape=(X_train.shape[1],))
y_input = Input(shape=(1,))
hidden1 = Dense(10)(inputs)
output = Dense(1)(hidden1)
my_loss = MyLoss(0.5, 0.5)(y_input, output) # here can also initialize those var1, var2
model = Model(inputs=[inputs, y_input], outputs=my_loss)
model.compile(optimizer='adam')
Finally, as mentioned in the TF docs, in this case you do not have to specify a loss or a y in the fit() function:
history = model.fit([X_train, y_train], None,
                    batch_size=32, epochs=100, validation_split=0.1, verbose=0,
                    callbacks=[EarlyStopping(monitor='val_loss', patience=5)])
Note again that y_train goes into fit() as one of the inputs.
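One practical side effect, noted here as my own addition rather than part of the original post: because y_input is one of the model's inputs, model.predict() would also ask for a y. A minimal sketch of a workaround is to build a second functional model over the same graph; it reuses the already-trained layers and needs no y:

inference_model = Model(inputs=inputs, outputs=output)  # shares the trained weights
y_pred = inference_model.predict(X_test)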
Now it works:
var1_list = []
var2_list = []
for i in range(100):
    if i % 10 == 0:
        print('step %d' % i)
    model.fit([X_train, y_train], None,
              batch_size=32, epochs=1, validation_split=0.1, verbose=0)
    var1, var2 = model.layers[-1].get_vars()
    var1_list.append(var1.numpy())
    var2_list.append(var2.numpy())

plt.plot(var1_list, label='var1')
plt.plot(var2_list, 'r', label='var2')
plt.legend()
plt.show()
(I should also mention that this specific pattern of var1 and var2 depends heavily on their initial values; for example, if var1's initial value is higher than 1 it will not in fact decrease towards minus inf.)
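A possible refinement, my own suggestion rather than part of the original post: instead of calling fit() one epoch at a time, a small Keras callback can record the two variables at the end of every epoch during a single fit() run. The VarTracker name and structure are mine:

from tensorflow.keras.callbacks import Callback

class VarTracker(Callback):
    # records the loss layer's two variables at the end of each epoch
    def __init__(self, loss_layer):
        super().__init__()
        self.loss_layer = loss_layer
        self.var1_history = []
        self.var2_history = []

    def on_epoch_end(self, epoch, logs=None):
        var1, var2 = self.loss_layer.get_vars()
        self.var1_history.append(float(var1.numpy()))
        self.var2_history.append(float(var2.numpy()))

tracker = VarTracker(model.layers[-1])
model.fit([X_train, y_train], None, batch_size=32, epochs=100, verbose=0,
          callbacks=[tracker])
plt.plot(tracker.var1_history, label='var1')
plt.plot(tracker.var2_history, 'r', label='var2')
plt.legend()
plt.show()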