
Restrict the sum of outputs in a neural network regression (Keras)

I'm predicting 7 targets, which are ratios of one value, so for each sample the sum of all predicted values should be 1. Apart from using softmax at the output (which seems obviously incorrect), I just can't figure out any other way to restrict the sum of all predicted outputs to be 1.
Thanks for any suggestions.

# imports assumed for the snippet (standard Keras; not shown in the original post)
from keras.layers import Input, Dense, Dropout, PReLU
from keras.models import Model
from keras.callbacks import EarlyStopping
from keras.optimizers import Adam

input_x = Input(shape=(input_size,))
output = Dense(512, activation=PReLU())(input_x)
output = Dropout(0.5)(output)
output = Dense(512, activation=PReLU())(output)
output = Dropout(0.5)(output)
output = Dense(16, activation=PReLU())(output)
output = Dropout(0.3)(output)
outputs = Dense(output_size, activation='softmax')(output)
#outputs = [Dense(1, activation=PReLU())(output) for i in range(output_size)]  # multi-output alternative

nn = Model(inputs=input_x, outputs=outputs)
es = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1, mode='auto')
opt = Adam(lr=0.001, decay=1 - 0.995)
nn.compile(loss='mean_absolute_error', optimizer=opt)
history = nn.fit(X, Y, validation_data=(X_t, Y_t), epochs=100, verbose=1, callbacks=[es])

Example of targets:

[image: a table of the 7 target ratio columns; each row sums to 1]

So, these are all ratios of one feature, and the sum of each row is 1.
For example, if the feature 'Total' = 100 points, with A = 25 points, B = 25 points, and all the others 10 points each, then my 7 target ratios will be 0.25/0.25/0.1/0.1/0.1/0.1/0.1.

I need to train on and predict such ratios, so that in the future, knowing 'Total', we can restore the points from the predicted ratios.
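
As a small illustration with the made-up numbers above (not part of the original post), the targets and the restored points are just a division and a multiplication by 'Total':

import numpy as np

# hypothetical sample: 'Total' = 100 points split across the 7 targets
total = 100.0
points = np.array([25, 25, 10, 10, 10, 10, 10], dtype=float)

ratios = points / total              # what the network is trained to predict; sums to 1
assert np.isclose(ratios.sum(), 1.0)

restored = ratios * total            # knowing 'Total', recover the original points
print(ratios)    # [0.25 0.25 0.1  0.1  0.1  0.1  0.1 ]
print(restored)  # [25. 25. 10. 10. 10. 10. 10.]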

I think I understand your motivation, and also why "softmax won't cut it".

This is because softmax doesn't scale linearly, so:

>>> from scipy.special import softmax
>>> softmax([1, 2, 3, 4])
array([0.0320586 , 0.08714432, 0.23688282, 0.64391426])
>>> softmax([1, 2, 3, 4]) * 10
array([0.32058603, 0.87144319, 2.36882818, 6.4391426 ])

which looks nothing like the original array.
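
By contrast, a plain divide-by-the-sum normalization is scale-invariant, so the ratios are preserved (a quick check, not from the original answer):

>>> import numpy as np
>>> x = np.array([1, 2, 3, 4], dtype=float)
>>> x / x.sum()
array([0.1, 0.2, 0.3, 0.4])
>>> (x * 10) / (x * 10).sum()
array([0.1, 0.2, 0.3, 0.4])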

Don't dismiss softmax too easily though - it can handle special situations like negative values, zeros, or a zero sum of the pre-activation signal. But if you want the final regression to be normalized to one, and expect the results to be non-negative, you can simply divide by the sum:

# imports assumed (standard Keras)
from keras import backend as K
from keras.layers import Input, Dense, Dropout, PReLU, Lambda
from keras.models import Model

input_x = Input(shape=(input_size,))
output = Dense(512, activation=PReLU())(input_x)
output = Dropout(0.5)(output)
output = Dense(512, activation=PReLU())(output)
output = Dropout(0.5)(output)
output = Dense(16, activation=PReLU())(output)
output = Dropout(0.3)(output)
outputs = Dense(output_size, activation='relu')(output)
# normalize each sample over the last axis (not over the whole batch);
# K.epsilon() guards against division by zero if every ReLU output is 0
outputs = Lambda(lambda x: x / (K.sum(x, axis=-1, keepdims=True) + K.epsilon()))(outputs)

nn = Model(inputs=input_x, outputs=outputs)

The Dense layer of course needs a different activation than 'softmax' ('relu' or even 'linear' is OK).
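
As a quick sanity check (assuming the model above has been trained and X_t are the validation inputs from the question), each predicted row should now sum to 1:

preds = nn.predict(X_t)      # shape: (n_samples, output_size)
print(preds.sum(axis=1))     # every entry should be (very close to) 1.0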
