Custom loss function which depends on another neural network in Keras
I have a "How can I do that?" question with Keras.

Assume I have a first neural network, say NNa, with 4 inputs (x, y, z, t), which is already trained. I have a second neural network, say NNb, whose loss function depends on the first one. The custom loss function of NNb, customLossNNb, calls the prediction of NNa on a fixed grid (x, y, z) and only modifies the last variable, t.

Here, in pseudo-Python code, is what I would like to do to train the second network, NNb:
    import numpy as np

    grid = np.mgrid[0:10:1, 0:10:1, 0:10:1].reshape(3, -1).T
    Y[:, 0] = time       # pseudo-code: Y, time and something are placeholders
    Y[:, 1] = something

    def customLossNNb(NNa, grid):
        def diff(y_true, y_pred):
            for ii in range(y_true.shape[0]):
                currentInput = concatenation of grid and y_true[ii, 0]  # pseudo-code
                toto[ii, :] = NNa.predict(currentInput)
            # some stuff with toto
            return  # ...
        return diff
Then:

    NNb.compile(loss=customLossNNb(NNa, K.variable(grid)), optimizer='Adam')
    NNb.fit(input, Y)
In fact, the line that causes me trouble is:

    currentInput = concatenation of grid and y_true[ii, 0]

I tried to pass the grid to customLossNNb as a tensor with K.variable(grid). But I can't define a new tensor inside the loss function, something like currentY, with shape (grid.shape[0], 1), filled with y[ii, 0] (i.e. the current t), and then concatenate grid and currentY to build currentInput.

Any ideas?

Thanks
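For concreteness, the construction intended for currentInput can be sketched in NumPy (outside the graph; inside an actual Keras loss the analogous symbolic ops would be K.tile / K.concatenate, since NumPy arrays cannot be built from graph tensors):

```python
import numpy as np

# the fixed (x, y, z) grid from the pseudo-code: 1000 rows, 3 columns
grid = np.mgrid[0:10:1, 0:10:1, 0:10:1].reshape(3, -1).T

t = 0.5  # stands in for the current scalar y_true[ii, 0]

# broadcast t into a column of shape (grid.shape[0], 1) ...
currentY = np.full((grid.shape[0], 1), t)

# ... and append it to the grid, giving one (x, y, z, t) row per grid point
currentInput = np.concatenate([grid, currentY], axis=1)  # shape (1000, 4)
```

Each row of currentInput is then a valid 4-component input for NNa.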
You can include your custom loss function in the graph using the functional API of Keras. The model in this case can be used as a function, something like this:

    for l in NNa.layers:
        l.trainable = False
    x = Input(size)
    y = NNb(x)
    z = NNa(y)
The predict method will not work here, since the loss function must be part of the graph, while predict returns a np.array.
First, make NNa untrainable. Notice that you should do this recursively if your model has inner models:
    def makeUntrainable(layer):
        layer.trainable = False
        if hasattr(layer, 'layers'):
            for l in layer.layers:
                makeUntrainable(l)

    makeUntrainable(NNa)
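To see that the recursion really reaches the layers of nested models, here is a minimal self-contained check; FakeLayer is a hypothetical stand-in for a Keras layer (an inner model is simply a layer that also has a .layers list):

```python
def makeUntrainable(layer):
    layer.trainable = False
    if hasattr(layer, 'layers'):
        for l in layer.layers:
            makeUntrainable(l)

class FakeLayer:
    """Stand-in for a Keras layer; passing inner=... mimics an inner model."""
    def __init__(self, inner=None):
        self.trainable = True
        if inner is not None:
            self.layers = inner

# an outer model containing one plain layer and one inner model
inner_model = FakeLayer(inner=[FakeLayer(), FakeLayer()])
outer_model = FakeLayer(inner=[FakeLayer(), inner_model])

makeUntrainable(outer_model)

def all_frozen(layer):
    """True only if this layer and every nested layer are untrainable."""
    if layer.trainable:
        return False
    return all(all_frozen(l) for l in getattr(layer, 'layers', []))

print(all_frozen(outer_model))  # prints True
```

Without the hasattr recursion, only the outer flag would flip and the inner model's layers would keep training.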
Then you have two options:

Option 1 - Append NNa to the end of NNb and transform your targets (y_true and y_pred will be changed):

    inputs = NNb.inputs
    outputs = NNa(NNb.outputs)  # make sure NNb outputs 4 tensors to match NNa's inputs
    fullModel = Model(inputs, outputs)

    # changing the targets:
    newY_train = NNa.predict(oldY_train)

Warning: please test whether NNa's weights are really frozen while training this configuration.

Option 2 - Create a custom loss function that uses NNa inside it, without changing your targets:

    from keras.losses import binary_crossentropy

    def customLoss(true, pred):
        true = NNa(true)
        pred = NNa(pred)
        # use one of the usual losses, or create your own
        return binary_crossentropy(true, pred)

    NNb.compile(optimizer=anything, loss=customLoss)
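To see why the two options optimize the same quantity, here is a small NumPy check using a fixed linear map as a stand-in for the frozen NNa, and mean squared error in place of binary_crossentropy. Option 1 trains the output of NNa(NNb(x)) against the transformed target NNa.predict(oldY_train); Option 2 applies NNa to both tensors inside the loss. The resulting loss values coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))       # fixed weights: stand-in for the frozen NNa
NNa = lambda a: a @ W             # NNa as a deterministic function

y_true = rng.normal(size=(8, 4))  # original targets
y_pred = rng.normal(size=(8, 4))  # NNb's raw outputs

mse = lambda a, b: np.mean((a - b) ** 2)

# Option 1: transform the targets once, then train fullModel = NNa(NNb(x))
loss_option1 = mse(NNa(y_true), NNa(y_pred))

# Option 2: apply NNa to both tensors inside the custom loss
def customLoss(true, pred):
    return mse(NNa(true), NNa(pred))

loss_option2 = customLoss(y_true, y_pred)

print(np.isclose(loss_option1, loss_option2))  # prints True
```

The practical difference is where the extra forward pass through NNa happens: once up front on the targets (Option 1), or on every batch inside the loss (Option 2).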