
Custom loss with weight arrays of batch size in tensorflow/keras

I am creating a custom loss function: MAE(y_true, y_pred) weighted by two arrays, a and b, where all four arrays are of the same size (10000 samples/timesteps).

def custom_loss(y_true, y_pred, a, b):
    mae = K.abs(y_true - y_pred)
    loss = mae * a * b
    return loss
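For concreteness, the per-sample loss this computes is |y_true - y_pred| * a * b. A quick NumPy check of that arithmetic (toy values, not from the question):

```python
import numpy as np

# toy data: three samples with their per-sample weights
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 1.0])
a = np.array([1.0, 0.5, 2.0])
b = np.array([2.0, 1.0, 0.5])

# element-wise weighted MAE, one value per sample
loss = np.abs(y_true - y_pred) * a * b
# [|1-1.5|*1*2, |2-2|*0.5*1, |3-1|*2*0.5] = [1.0, 0.0, 2.0]
```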

Q: How do I feed a and b into the function? Both should be split and shuffled the same way as y_true and y_pred.

So far, I am training an LSTM on data X of shape (samples x timesteps x variables). Here, I tried tf's add_loss function to do the job, but passing a and b as additional input layers leads to errors caused by the differing data shapes.

#LSTM ('in' is a reserved word in Python, so the intermediate layer is named x)
input_layer = Input(shape=input_shape)
x = LSTM(20, activation='relu', return_sequences=True)(input_layer)
out = LSTM(1, activation='linear', return_sequences=False)(x)

layer_a = Input(shape=(10000,))
layer_b = Input(shape=(10000,))

model = Model(inputs=[input_layer, layer_a, layer_b], outputs=out)
model.add_loss(custom_loss(input_layer, out, layer_a, layer_b))
model.compile(loss=None, optimizer=Adam(0.01))

# X = data of shape 20 variables x 10000 timesteps; y, a, b = data of shape 10000 timesteps
model.fit(x=[X, a, b], y=y, batch_size=1, shuffle=True)

How can I do this correctly?

As you already found, you have to use add_loss. Just remember to pass all the required variables to your loss: the ground truth, the predictions, and the extra tensors, all in the proper format.

import numpy as np
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K

n_sample = 100
timesteps = 30
features = 5

X = np.random.uniform(0,1, (n_sample,timesteps,features))
y = np.random.uniform(0,1, n_sample)
a = np.random.uniform(0,1, n_sample)
b = np.random.uniform(0,1, n_sample)

def custom_loss(y_true, y_pred, a, b):
    mae = K.abs(y_true - y_pred)
    loss = mae * a * b
    return loss


input_layer = Input(shape=(timesteps, features))
x = LSTM(20, activation='relu', return_sequences=True)(input_layer)
out = LSTM(1, activation='linear')(x)

layer_a = Input(shape=(1,))
layer_b = Input(shape=(1,))
target = Input(shape=(1,))

model = Model(inputs = [target, input_layer, layer_a, layer_b], outputs = out)  
model.add_loss(custom_loss(target, out, layer_a, layer_b))
model.compile(loss=None, optimizer=Adam(0.01))

model.fit(x=[y, X, a, b], y=None, shuffle=True, epochs=3)

To use the model in inference mode (removing y as an input, and a and b too if they are no longer needed):

final_model = Model(model.inputs[1], model.output)
final_model.predict(X)
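The question's requirement that a and b "be split and shuffled like y_true and y_pred" is exactly what passing them as model inputs achieves: Keras applies one shared permutation to every array handed to fit(). A standalone NumPy sketch of that aligned shuffle (illustrative only, not Keras internals):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# encode the original row index into each array so alignment is checkable
X = np.arange(n * 2).reshape(n, 2)   # row i = [2i, 2i+1]
y = np.arange(n) * 10                # row i = 10*i
a = np.arange(n) + 100               # row i = i + 100
b = np.arange(n) + 200               # row i = i + 200

# one permutation applied to all arrays keeps every row's target
# and weights together, just as shuffling inside fit() does
perm = rng.permutation(n)
X_s, y_s, a_s, b_s = X[perm], y[perm], a[perm], b[perm]
```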

If you need a and b only to compute the loss function, then I would write a wrapper around your custom loss function and pass y, a and b together as your labels.

Something like this:

n_sample = 100
timesteps = 30
features = 5

X = np.random.uniform(0,1, (n_sample,timesteps,features))
y = np.random.uniform(0,1, n_sample)
a = np.random.uniform(0,1, n_sample)
b = np.random.uniform(0,1, n_sample)

def custom_loss_wrapper(y_true, y_pred):
    # y_true packs [y, a, b] along the last axis: shape (batch, 3)
    y_t = y_true[:, 0:1]
    a = y_true[:, 1:2]
    b = y_true[:, 2:3]
    mae = K.abs(y_t - y_pred)
    return mae * a * b


input_layer = Input(shape=(timesteps, features))
x = LSTM(20, activation='relu', return_sequences=True)(input_layer)
out = LSTM(1, activation='linear')(x)

model = Model(inputs =input_layer, outputs = out)  
model.compile(loss=custom_loss_wrapper, optimizer=Adam(0.01))

model.fit(x=X, y=np.stack([y, a, b], axis=-1), shuffle=True, epochs=3)

This simplifies the network architecture and removes the unnecessary layer_a and layer_b at inference time.
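One way to hand y, a and b to a loss with the standard (y_true, y_pred) signature is to pack them into a single label array of shape (n_sample, 3), which the loss can then slice back out column by column. A minimal NumPy sketch of the packing and unpacking (illustrative values):

```python
import numpy as np

n_sample = 4
rng = np.random.default_rng(42)
y = rng.uniform(0, 1, n_sample)
a = rng.uniform(0, 1, n_sample)
b = rng.uniform(0, 1, n_sample)

# pack target and both weight arrays into one (n_sample, 3) label array
labels = np.stack([y, a, b], axis=-1)

# inside the loss, the columns are recovered by slicing the last axis
y_col, a_col, b_col = labels[:, 0], labels[:, 1], labels[:, 2]
```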
