In Tensorflow.keras 2.0, when a model has multiple outputs, how to define a flexible loss function for model.fit()?
Let's say we have a model with two outputs:
import tensorflow as tf
import numpy as np
x = tf.keras.Input(shape=(35, 7), dtype=tf.float32) # (None, 35, 7)
net = tf.keras.layers.Dense(11, activation='relu')(x) # (None, 35, 11)
net = tf.reduce_max(net, axis=1, name='maxpool') # (None, 11)
a = tf.keras.layers.Dense(13, activation='relu')(net) # (None, 13)
b = tf.keras.layers.Dense(17, activation='relu')(net) # (None, 17)
model = tf.keras.Model(inputs=x, outputs=[a, b])
When I do model.compile(loss=loss_fn, optimizer='sgd'), then model.fit(x=train, y=(label1, label2)) runs loss_fn for each pair of output and label (i.e., loss_fn(a, l1) and loss_fn(b, l2)).
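For illustration, a minimal sketch of such a shared loss function (the mean-squared-error body is just a placeholder; it relies on the imports at the top):

def loss_fn(y_true, y_pred):
    # Called once per output: y_true/y_pred are a single (label, output) pair
    return tf.reduce_mean(tf.square(y_true - y_pred))

model.compile(loss=loss_fn, optimizer='sgd')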
When I do model.compile(loss=[loss_fn1, loss_fn2], optimizer='sgd'), then model.fit(x=train, y=(label1, label2)) runs loss_fn1 for a and loss_fn2 for b (i.e., loss_fn1(a, l1) and loss_fn2(b, l2)).
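For illustration, a sketch of that second form (the loss bodies are placeholders; loss_weights is an optional Keras compile argument, shown here with placeholder values):

def loss_fn1(y_true, y_pred):
    # Used only for output a
    return tf.reduce_mean(tf.square(y_true - y_pred))

def loss_fn2(y_true, y_pred):
    # Used only for output b
    return tf.reduce_mean(tf.abs(y_true - y_pred))

model.compile(loss=[loss_fn1, loss_fn2], loss_weights=[1.0, 0.5], optimizer='sgd')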
So, basically it seems to handle the outputs individually (each paired with its corresponding label).
What if I have to define a loss function that should handle/consider both outputs together, and use that function with model.fit?
(One thing I can think of is to concatenate the outputs into one tensor and separate them inside the loss function. However, I don't want to go there since the two outputs may not have consistent shapes. Instead, is it possible to do something like the following, for example?)
def loss_fn(y_true, y_pred):
    # I want to access both outputs ...
    l1, l2 = y_true
    a, b = y_pred
    # ... do something about the loss ...
    return loss
You would concatenate your two Dense layers, and do exactly the same as you mentioned:
import numpy as np
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K
i = Input((10,))
x = Dense(10)(i)
a = Dense(3, use_bias=False)(x)
b = Dense(3, use_bias=False)(x)
# Now you concatenate both outputs,
# so nothing happens to them
c = Concatenate()([a,b])
m = Model(i, c)
def loss(y_true, y_pred):
    # Do your loss on your subset
    a, b = y_pred[:, :3], y_pred[:, 3:]
    # Do something random
    return K.mean(a*b)
m.compile("adam", loss)
m.fit(np.random.random((10, 10)),
      np.random.random((10, 6)))
# Outputs:
# 10/10 [=======] - 0s 22ms/sample - loss: -0.2251
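Applied to the model from the question, a sketch of the same trick (assuming the two heads keep the (None, 13) and (None, 17) shapes from above; the squared-error terms are just placeholders) could be:

# x, a, b here are the tensors from the question's model at the top
c = tf.keras.layers.Concatenate(axis=-1)([a, b])  # (None, 13 + 17)
model = tf.keras.Model(inputs=x, outputs=c)

def joint_loss(y_true, y_pred):
    # Slice predictions and labels back into the per-head pieces
    pred_a, pred_b = y_pred[:, :13], y_pred[:, 13:]
    true_a, true_b = y_true[:, :13], y_true[:, 13:]
    # ... combine both parts however the task requires; placeholder below
    return tf.reduce_mean(tf.square(pred_a - true_a)) + tf.reduce_mean(tf.square(pred_b - true_b))

model.compile(optimizer='sgd', loss=joint_loss)
# The labels have to be packed the same way before fitting, e.g.:
# model.fit(train, np.concatenate([label1, label2], axis=-1))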
edit: I hadn't seen that @bit01 had already commented the way-to-go approach.