
Calculating Fscore for each epoch using keras (not batch-wise)

The essence of this question:

I want to find a correct way to calculate the Fscore for the validation and training data after each epoch (not batch-wise).

For a binary classification task, I would like to calculate the Fscore after each epoch using a simple keras model, but how to calculate the Fscore seems to be quite a discussed topic.

I know that keras works in batches, and one way to calculate the fscore for each batch would be https://stackoverflow.com/a/45305384/10053244 (Fscore-calculation: f1).

The batch-wise calculation can be quite confusing, and I prefer to calculate the Fscore after each epoch. So just calling history.history['f1'] or history.history['val_f1'] does not do the trick, since those show the batch-wise fscores.
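
For reference, the batch-wise f1 metric from the linked answer (which is also what the bare f1 used in the code below refers to) is along these lines; this is a sketch, not a verbatim copy of that answer:

from keras import backend as K

def f1(y_true, y_pred):
    ## batch-wise F1: precision and recall are computed per batch and then averaged over batches
    def recall(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        return true_positives / (possible_positives + K.epsilon())
    def precision(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        return true_positives / (predicted_positives + K.epsilon())
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))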

One way I figured out is to save each model using ModelCheckpoint (from keras.callbacks import ModelCheckpoint):

  1. Save the model weights after every epoch
  2. Reload the model and use model.evaluate or model.predict

Edit:

Using the tensorflow backend, I decided to track TruePositives, FalsePositives and FalseNegatives (as umbreon29 suggested). But now comes the fun part: the results when reloading the model are different for the training data (TP, FP, FN differ), but not for the validation set!

So a simple model that stores the weights, rebuilds each model and recalculates TP, FN, FP (and finally the Fscore) looks like this:

import os
import numpy as np

from keras.layers import Input, Dense
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from keras.metrics import TruePositives, TrueNegatives, FalseNegatives, FalsePositives
## f1 is the batch-wise metric from the linked answer (see the sketch above)

## simple keras model
sequence_input = Input(shape=(input_dim,), dtype='float32')
preds = Dense(1, activation='sigmoid',name='output')(sequence_input)
model = Model(sequence_input, preds)

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=[TruePositives(name='true_positives'),
                       TrueNegatives(name='true_negatives'),
                       FalseNegatives(name='false_negatives'),
                       FalsePositives(name='false_positives'),
                       f1])

# model checkpoints
filepath="weights-improvement-{epoch:02d}-{val_f1:.2f}.hdf5"
checkpoint = ModelCheckpoint(os.path.join(savemodel,filepath), monitor='val_f1', verbose=1, save_best_only=False, save_weights_only=True, mode='auto')
callbacks_list = [checkpoint]

history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=epoch, batch_size=batch,
                    callbacks=callbacks_list)

## Saving TP, FN, FP to calculate Fscore
tp, fp, fn = [], [], []
tp.append(history.history['true_positives'])
fp.append(history.history['false_positives'])
fn.append(history.history['false_negatives'])

arr_train = np.stack((tp, fp, fn), axis=1)

## doing the same for tp_val, fp_val, fn_val 
[...]
arr_val = np.stack((tp_val, fp_val, fn_val), axis=1)

## the following method just shows batch-wise fscores and shouldn't be used:
## f1_sc.append(history.history['f1'])
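
From the stacked per-epoch counts, the epoch-level Fscore follows directly from F1 = 2*TP / (2*TP + FP + FN); a minimal sketch (assuming tp, fp and fn hold one count per epoch):

## minimal sketch: epoch-level Fscore from the per-epoch counts
tp_arr = np.asarray(tp, dtype=float).ravel()
fp_arr = np.asarray(fp, dtype=float).ravel()
fn_arr = np.asarray(fn, dtype=float).ravel()
f1_per_epoch = 2 * tp_arr / (2 * tp_arr + fp_arr + fn_arr)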

Reloading the model after each epoch to calculate the Fscores (using the sklearn metric from sklearn.metrics import f1_score on the output of the predict method is equivalent to calculating the fscore from TP, FP, FN):

from sklearn.metrics import f1_score

Fscore_val = []
fscorepredict_val_sklearn = []
Fscore_train = []
fscorepredict_train = []

## model_loads contains list of model-paths
for i in model_loads:
    ## rebuilding the model each time since only weights are stored
    sequence_input = Input(shape=(input_dim,), dtype='float32')
    preds = Dense(1, activation='sigmoid',name='output')(sequence_input)
    model = Model(sequence_input, preds)
    model.load_weights(i)

    # Compile model (required to make predictions)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=[TruePositives(name='true_positives'),
                           TrueNegatives(name='true_negatives'),
                           FalseNegatives(name='false_negatives'),
                           FalsePositives(name='false_positives'),
                           f1
                           ])    

    ### For Validation data
    ## using evaluate
    y_pred =  model.evaluate(x_val, y_val, verbose=0)
    Fscore_val.append(y_pred)  ## contains (loss, tp, tn, fn, fp, batch-wise f1)

    ## using predict
    y_pred = model.predict(x_val)
    val_preds = [1 if x > 0.5 else 0 for x in y_pred]
    cm = f1_score(y_val, val_preds)
    fscorepredict_val_sklearn.append(cm)  ## equivalent to Fscore calculated from Fscore_vals tp,fp, fn


    ### For the training data
    y_pred =  model.evaluate(x_train, y_train, verbose=0) 
    Fscore_train.append(y_pred) ## also contains (loss, tp, tn, fn, fp, batch-wise f1)

    y_pred =  model.predict(x_train, verbose=0)  # gives probabilities
    train_preds = [1 if x > 0.5 else 0 for x in y_pred]
    cm = f1_score(y_train, train_preds)
    fscorepredict_train.append(cm)

Calculating the Fscore from the tp, fn and fp contained in Fscore_val and comparing it to fscorepredict_val_sklearn, and to the Fscore calculated from arr_val, gives equivalent and identical results.
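
A quick way to check this equivalence is to recompute F1 from the counts returned by evaluate and compare it against the sklearn values; a sketch (assuming each row of Fscore_val follows the compile order loss, tp, tn, fn, fp, batch-wise f1):

ev = np.asarray(Fscore_val, dtype=float)
tp_v, fn_v, fp_v = ev[:, 1], ev[:, 3], ev[:, 4]
f1_from_counts = 2 * tp_v / (2 * tp_v + fp_v + fn_v)
print(np.allclose(f1_from_counts, fscorepredict_val_sklearn, atol=1e-4))  ## True when the two agree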

However, when comparing Fscore_train and arr_train, the numbers of tp, fn and fp are different. Consequently, I also arrive at different Fscores. The tp, fn and fp counts should be the same, but they are not. Is this a bug?

Which one should I trust? fscorepredict_train actually seems more trustworthy, since it starts from the "always guessing class 1" Fscore (the Fscore when recall = 1): (fscorepredict_train[0] = 0.6784 vs f_hist[0] = 0.5736 vs always-guessing-class-1-fscore = 0.6751)

[Note: Fscore_train[0] = [0.6853608025386962, 2220.0, 250.0, 111.0, 1993.0, 0.6730511784553528] (loss, tp, tn, fn, fp, batch-wise f1) leads to fscore = 0.6784, so the Fscore from Fscore_train = fscorepredict_train]
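
The two quoted numbers can be reproduced from the counts; a small worked sketch (the class prevalence is only implied by the 0.6751 baseline, so treat it as an assumption):

## F1 from the Fscore_train[0] counts (F1 is symmetric in fp/fn, so their ordering does not change the result)
tp0, fn0, fp0 = 2220.0, 111.0, 1993.0
print(2 * tp0 / (2 * tp0 + fp0 + fn0))   ## ~0.6784

## "always guess class 1" baseline: precision = positive prevalence p, recall = 1, so F1 = 2p / (1 + p)
p = 0.6751 / (2 - 0.6751)                ## prevalence implied by the quoted baseline, ~0.51
print(2 * p / (1 + p))                   ## recovers 0.6751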

I provide a custom callback that computes the score (in your case F1 from sklearn) on ALL the data at the end of an epoch (for the training data and, optionally, the validation data):

import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

class F1History(tf.keras.callbacks.Callback):

    def __init__(self, train, validation=None):
        super(F1History, self).__init__()
        self.validation = validation
        self.train = train

    def on_epoch_end(self, epoch, logs={}):

        logs['F1_score_train'] = float('-inf')
        X_train, y_train = self.train[0], self.train[1]
        # hard 0/1 labels from the sigmoid probabilities, thresholded at 0.5
        y_pred = (self.model.predict(X_train).ravel()>0.5)+0
        score = f1_score(y_train, y_pred)       

        if (self.validation):
            logs['F1_score_val'] = float('-inf')
            X_valid, y_valid = self.validation[0], self.validation[1]
            y_val_pred = (self.model.predict(X_valid).ravel()>0.5)+0
            val_score = f1_score(y_valid, y_val_pred)
            logs['F1_score_train'] = np.round(score, 5)
            logs['F1_score_val'] = np.round(val_score, 5)
        else:
            logs['F1_score_train'] = np.round(score, 5)

Here is a dummy example:

import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

x_train = np.random.uniform(0,1, (30,10))
y_train = np.random.randint(0,2, (30))

x_val = np.random.uniform(0,1, (20,10))
y_val = np.random.randint(0,2, (20))

sequence_input = Input(shape=(10,), dtype='float32')
preds = Dense(1, activation='sigmoid',name='output')(sequence_input)
model = Model(sequence_input, preds)

es = EarlyStopping(patience=3, verbose=1, min_delta=0.001, monitor='F1_score_val', mode='max', restore_best_weights=True)
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(x_train,y_train, epochs=10, 
          callbacks=[F1History(train=(x_train,y_train),validation=(x_val,y_val)),es])

The output prints:

Epoch 1/10
1/1 [==============================] - 0s 78ms/step - loss: 0.7453 - F1_score_train: 0.3478 - F1_score_val: 0.4762
Epoch 2/10
1/1 [==============================] - 0s 57ms/step - loss: 0.7448 - F1_score_train: 0.3478 - F1_score_val: 0.4762
Epoch 3/10
1/1 [==============================] - 0s 58ms/step - loss: 0.7444 - F1_score_train: 0.3478 - F1_score_val: 0.4762
Epoch 4/10
1/1 [==============================] - ETA: 0s - loss: 0.7439Restoring model weights from the end of the best epoch.
1/1 [==============================] - 0s 70ms/step - loss: 0.7439 - F1_score_train: 0.3478 - F1_score_val: 0.4762
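
Because the callback writes the scores into logs, they should also end up in the History object returned by fit, so (under the setup above) something like this can be used to inspect them after training:

hist = model.fit(x_train, y_train, epochs=10,
                 callbacks=[F1History(train=(x_train, y_train), validation=(x_val, y_val)), es])
print(hist.history['F1_score_train'])  ## one epoch-level F1 per epoch
print(hist.history['F1_score_val'])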

I have TF 2.2 and it works fine, hope it helps.
