
roc_auc_score is coming out as 0 with 97% test accuracy. Is that possible?

Edit (sorry, I really should have posted more details):

Here is the full code example.

from __future__ import absolute_import
from __future__ import print_function
import numpy as np
import random
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout, Lambda
from keras.optimizers import RMSprop
from keras import backend as K
from sklearn import metrics

num_classes = 10
epochs = 2


def euclidean_distance(vects):
    x, y = vects
    return K.sqrt(K.maximum(K.sum(K.square(x - y), axis=1, keepdims=True), K.epsilon()))


def eucl_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 1)


def contrastive_loss(y_true, y_pred):
    '''Contrastive loss from Hadsell-et-al.'06
    http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
    '''
    margin = 1
    return K.mean(y_true * K.square(y_pred) +
                  (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))


def create_pairs(x, digit_indices):
    '''Positive and negative pair creation.
    Alternates between positive and negative pairs.
    '''
    pairs = []
    labels = []
    n = min([len(digit_indices[d]) for d in range(num_classes)]) - 1
    for d in range(num_classes):
        for i in range(n):
            z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
            pairs += [[x[z1], x[z2]]]
            inc = random.randrange(1, num_classes)
            dn = (d + inc) % num_classes
            z1, z2 = digit_indices[d][i], digit_indices[dn][i]
            pairs += [[x[z1], x[z2]]]
            labels += [1, 0]
    return np.array(pairs), np.array(labels)


def create_base_network(input_shape):
    '''Base network to be shared (eq. to feature extraction).
    '''
    input = Input(shape=input_shape)
    x = Flatten()(input)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.1)(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.1)(x)
    x = Dense(128, activation='relu')(x)
    return Model(input, x)


def compute_accuracy(y_true, y_pred):
    '''Compute classification accuracy with a fixed threshold on distances.
    '''
    pred = y_pred.ravel() < 0.5
    return np.mean(pred == y_true)


def accuracy(y_true, y_pred):
    '''Compute classification accuracy with a fixed threshold on distances.
    '''
    return K.mean(K.equal(y_true, K.cast(y_pred < 0.5, y_true.dtype)))


# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
input_shape = x_train.shape[1:]

# create training+test positive and negative pairs
digit_indices = [np.where(y_train == i)[0] for i in range(num_classes)]
tr_pairs, tr_y = create_pairs(x_train, digit_indices)

digit_indices = [np.where(y_test == i)[0] for i in range(num_classes)]
te_pairs, te_y = create_pairs(x_test, digit_indices)

# network definition
base_network = create_base_network(input_shape)

input_a = Input(shape=input_shape)
input_b = Input(shape=input_shape)

# because we re-use the same instance `base_network`,
# the weights of the network
# will be shared across the two branches
processed_a = base_network(input_a)
processed_b = base_network(input_b)

distance = Lambda(euclidean_distance,
                  output_shape=eucl_dist_output_shape)([processed_a, processed_b])

model = Model([input_a, input_b], distance)

# train
rms = RMSprop()
model.compile(loss=contrastive_loss, optimizer=rms, metrics=[accuracy])
model.fit([tr_pairs[:, 0], tr_pairs[:, 1]], tr_y,
          batch_size=128,
          epochs=epochs,
          validation_data=([te_pairs[:, 0], te_pairs[:, 1]], te_y))

# compute final accuracy on training and test sets
y_pred = model.predict([tr_pairs[:, 0], tr_pairs[:, 1]])
tr_acc = compute_accuracy(tr_y, y_pred)
y_pred = model.predict([te_pairs[:, 0], te_pairs[:, 1]])
te_acc = compute_accuracy(te_y, y_pred)

print('* Accuracy on training set: %0.2f%%' % (100 * tr_acc))
print('* Accuracy on test set: %0.2f%%' % (100 * te_acc))

roc_auc_score = metrics.roc_auc_score(te_y, 1-y_pred)
print("roc_auc_score:  %0.2f" % roc_auc_score)

I am trying to learn how siamese networks and the contrastive loss function are used. I started from the Keras example here. I am trying to get the roc_auc_score from scikit-learn, and it gives me 0.00.

Train on 108400 samples, validate on 17820 samples
Epoch 1/2
108400/108400 [==============================] - 6s 52us/step - loss: 0.0930 - accuracy: 0.8910 - val_loss: 0.0420 - val_accuracy: 0.9582
Epoch 2/2
108400/108400 [==============================] - 5s 49us/step - loss: 0.0390 - accuracy: 0.9615 - val_loss: 0.0295 - val_accuracy: 0.9710
* Accuracy on training set: 97.80%
* Accuracy on test set: 96.82%
roc_auc_score:  0.01

I feel something must be wrong here, for example the positive and negative labels may not be getting passed to roc_auc_score in the right way.

Does anyone know why this is happening, and how to fix it without manually setting pos_label? Please let me know. Thanks for your time.

A ROC curve is obtained by thresholding the scores, which is normally done with the greater-than operator (>). The distances produced by the model, however, have the opposite ordering: values close to zero mean the two samples are similar, and larger distances mean dissimilar samples. That means these scores (your distances) would have to be thresholded with the < operator instead.
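
To see the effect in isolation, here is a minimal sketch with made-up labels and distances (not taken from the question): roc_auc_score treats larger scores as more positive, so feeding raw distances inverts the ranking.

import numpy as np
from sklearn import metrics

# Hypothetical toy data: label 1 = similar pair, 0 = dissimilar pair,
# and the scores are distances, so smaller values indicate the positive class.
y_true = np.array([1, 1, 0, 0])
distances = np.array([0.1, 0.2, 0.9, 0.8])

# Raw distances rank every negative pair above every positive pair -> AUC 0.0
print(metrics.roc_auc_score(y_true, distances))      # 0.0
# Flipping the orientation restores the intended ranking -> AUC 1.0
print(metrics.roc_auc_score(y_true, 1 - distances))  # 1.0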

A simple solution is to flip the model's predictions:

>>> metrics.roc_auc_score(tr_y, 1.0 - y_pred)
0.9954217433041488

Taking one minus the model's predictions means they can now be thresholded with the > operator, which makes the AUC meaningful.
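
As a side note, here is a sketch reusing the te_y and y_pred arrays from the code above: the AUC depends only on how the scores rank the pairs, so any strictly decreasing transform of the distances gives the same value, and simply negating them works just as well as 1 - y_pred.

# Negating the distances produces the same ranking as 1 - y_pred,
# so the resulting AUC is identical.
print(metrics.roc_auc_score(te_y, -y_pred.ravel()))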
