
Does the following code give recall for multiclass classification in Keras?

Does the following code give recall for multiclass classification in Keras? Even though I am not passing y_true and y_pred when calling the recall function in model.compile, it still showed me a recall result.

from tensorflow.keras import backend as K

def recall(y_true, y_pred):
    y_true = K.ones_like(y_true)  # note: this overwrites the ground truth with all ones
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    all_positives = K.sum(K.round(K.clip(y_true, 0, 1)))

    recall = true_positives / (all_positives + K.epsilon())
    return recall

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=[recall])

Yes, it works, because recall is called multiple times under the hood inside model.fit(), which supplies those two values on every batch.
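For instance, you can call the metric yourself on a single batch to see the kind of arguments Keras passes to it (a minimal sketch, assuming the recall function above is in scope; the batch values below are made up, using one-hot ground truth for illustration):

import tensorflow as tf

# Made-up batch: 3 samples, 4 classes, one-hot ground truth.
y_true = tf.constant([[0., 1., 0., 0.],
                      [0., 0., 1., 0.],
                      [1., 0., 0., 0.]])
# Made-up predicted probabilities for the same batch.
y_pred = tf.constant([[0.1, 0.7, 0.1, 0.1],
                      [0.2, 0.2, 0.5, 0.1],
                      [0.6, 0.2, 0.1, 0.1]])

# This is essentially the call model.fit() makes on every batch:
print('recall on this batch: %.3f' % float(recall(y_true, y_pred)))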

It works in a way similar (though more complex and optimized) to this:

accuracy = tf.keras.metrics.CategoricalAccuracy()
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Iterate over the batches of a dataset.
for step, (x, y) in enumerate(dataset):
    with tf.GradientTape() as tape:
        logits = model(x)
        # Compute the loss value for this batch.
        loss_value = loss_fn(y, logits)

    # Update the state of the `accuracy` metric.
    accuracy.update_state(y, logits)

    # Update the weights of the model to minimize the loss value.
    gradients = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))

    # Logging the current accuracy value so far.
    if step % 100 == 0:
        print('Step:', step)        
        print('Total running accuracy so far: %.3f' % accuracy.result())

This is called a GradientTape, and it can be used to perform a customized training loop. Basically, it exposes the gradients computed on the trainable tensors of your model. It lets you update the weights of the model manually, so it is really useful for customization. All of this is also done automatically inside model.fit(); you don't need it here, it is just to explain how things work.
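Here is the tape in isolation, outside of any model, to show what "exposing the gradients" means (a minimal sketch):

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2               # record the forward computation on the tape
dy_dx = tape.gradient(y, x)  # dy/dx = 2x, evaluated at x = 3.0
print(dy_dx.numpy())         # 6.0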

As you can see in the loop above, the predictions, that is, the logits, are computed for every batch of the dataset. The logits and the ground truth, that is, the correct y values, are passed as arguments to accuracy.update_state, just as it is done, without you seeing it, inside model.fit(). Even the order is the same: y_true and y are both the ground truth, and y_pred and logits are both the predictions.
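To make the stateful part concrete, here is the same update_state/result pattern in isolation (a minimal sketch with made-up batches):

import tensorflow as tf

accuracy = tf.keras.metrics.CategoricalAccuracy()

# Two made-up batches of one-hot ground truth and predictions.
y1 = tf.constant([[0., 1.], [1., 0.]])
p1 = tf.constant([[0.2, 0.8], [0.6, 0.4]])  # both predictions correct
y2 = tf.constant([[1., 0.]])
p2 = tf.constant([[0.3, 0.7]])              # this one is wrong

accuracy.update_state(y1, p1)  # running total: 2 correct out of 2 seen
accuracy.update_state(y2, p2)  # running total: 2 correct out of 3 seen
print('Total running accuracy so far: %.3f' % accuracy.result())  # 0.667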

I hope this has made things clearer.
