
tf.nn.in_top_k(logits, y, 1) raises an out-of-range error even though the target value equals num_classes

I am building my first neural network, a binary classifier, and I get an error when I try to evaluate the model with:

correct = tf.nn.in_top_k(logits,y,1)

where

  • the logits tensor holds the predictions: shape [batch_size = 52, num_classes = 1], dtype float32
  • the y tensor holds the targets: shape [batch_size = 52], dtype int32

I get this error:

targets[1] is out of range
     [[{{node in_top_k/InTopKV2}}]]

After some debugging, I understood that the values of my tensor y must be strictly less than num_classes, so the first value of y, which equals 1, is considered out of range, even though the parameter num_classes = 1.

How can I make this work when a value of my tensor is equal to num_classes rather than strictly less than it? Or is there another way to do this?

In my opinion, num_classes should be equal to 1 because this is binary classification, so a single output neuron is needed.


EDIT: here is my complete code:

import tensorflow as tf
import numpy as np

def reset_graph(seed=42):
    # assumed helper (it was not defined in the snippet): clears the
    # default graph and fixes the random seeds for reproducibility
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
n_inputs = 28 
n_hidden1 = 15
n_hidden2 = 5
n_outputs = 1
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") 
y = tf.placeholder(tf.int32, shape=(None), name="y")   #None => any
def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.shape[1])
        stddev = 2 / np.sqrt(n_inputs) 
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)  # n_inputs x n_neurons matrix of values close to 0
        W = tf.Variable(init, name="kernel")  # random weights
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        tf.cast(Z, tf.int32)  # note: this cast has no effect because its result is never assigned
        if activation is not None:
            return activation(Z)
        else:
            return Z
def to_one_hot(y):
    n_classes = y.max() + 1
    m = len(y)
    Y_one_hot = np.zeros((m, n_classes))
    Y_one_hot[np.arange(m), y] = 1
    return Y_one_hot
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
                           activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
                           activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
xentropy = tf.keras.backend.binary_crossentropy(tf.to_float(y), logits)  # note: by default binary_crossentropy expects probabilities, not raw logits; from_logits=True would be the idiomatic choice
loss = tf.reduce_mean(xentropy)
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits,y,1)
labels_max = tf.reduce_max(y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
def shuffle_batch(X, y, batch_size):  # shuffle the data and split it into n_batches mini-batches
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch


with tf.Session() as sess:
    init.run()
    X_temp, Y_temp = X_batch, y_batch  # note: X_batch/y_batch must already exist from an earlier run
    feed_dict={X: X_batch, y: y_batch}
    print("feed",feed_dict)
    print("\n y_batch :",y_batch,y_batch.dtype)
    print("\n X_batch :",X_batch,X_batch.dtype,X_batch.shape)

    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, Y_train, batch_size):
            y_batch=y_batch.astype(np.int32)
            X_batch=X_batch.astype(np.float32)
            sess.run(training_op,feed_dict={X: X_batch, y: y_batch})
        #acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        #acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        #print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val)
    save_path = saver.save(sess, "./my_model_final.ckpt")
    #some tests
    print("y eval :",y.eval(feed_dict={X:X_temp,y:Y_temp}).shape)

    y_one_hot=to_one_hot(y.eval(feed_dict={X:X_temp,y:Y_temp}))
    print("y_one_hot :",y_one_hot.shape)

    print("logits eval : ",logits.eval(feed_dict={X:X_temp,y:Y_temp}))
    #print(correct.eval(feed_dict={X:X_temp,y:Y_temp}))
    print(labels_max.eval(feed_dict={X:X_temp,y:Y_temp}))

According to the documentation, tf.nn.in_top_k(predictions, targets, k) has the following parameters:

  • predictions: a Tensor of type float32; a batch_size x classes tensor.
  • targets: a Tensor; must be one of the following types: int32, int64; a batch_size vector of class ids.
  • k: an int; the number of top elements to look at for computing precision.

Since you are performing binary classification, i.e. there are two classes, the shape of the logits tensor in your case should be (52, 2) and the shape of y should be (52,). Here, logits is essentially a one-hot encoded tensor, which is why you are getting the above error.

Consider the examples below:

Example 1

res = tf.nn.in_top_k([[0,1], [1,0], [0,1], [1, 0], [0, 1]], [0, 1, 1, 1, 1], 1)

Here, the shape of logits is (5, 2) while y has shape (5,). If you do tf.reduce_max(y), you get 1, which is less than the number of classes and hence fine.

This works fine and outputs [False False True False True].
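Note that in TF 1.x, in_top_k only builds a graph op, so to actually see this output you have to run it in a session. A minimal sketch (assuming TF 1.x; the variable names are just for illustration):

import tensorflow as tf

# Evaluate the Example 1 op inside a session.
res = tf.nn.in_top_k([[0., 1.], [1., 0.], [0., 1.], [1., 0.], [0., 1.]],
                     [0, 1, 1, 1, 1], 1)

with tf.Session() as sess:
    print(sess.run(res))  # [False False  True False  True]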

Example 2

res = tf.nn.in_top_k([[0,1], [1,0], [0,1], [1, 0], [0, 1]], [0, 2, 1, 1, 1], 1)

If you do tf.reduce_max(y), you get 2, which is equal to the number of classes. This raises the error: InvalidArgumentError: targets[1] is out of range

EDIT: In your code above, make the following modifications:

  • change n_outputs = 1 to n_outputs = 2
  • change sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) to _, cost, acc = sess.run([training_op, loss, accuracy], feed_dict={X: X_batch, y: to_one_hot(y_batch)}) (an in-graph alternative to the to_one_hot helper is sketched after this list)
  • change correct = tf.nn.in_top_k(logits, y, 1) to correct = tf.nn.in_top_k(logits, tf.argmax(y, 1), 1)
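As a side note, and purely as a sketch of my own rather than part of the fix above: TF 1.x also ships tf.one_hot, which can do the same encoding inside the graph, so the feed_dict keeps receiving the raw integer labels:

# In-graph alternative to the NumPy to_one_hot helper (sketch).
# The placeholder name y_int is hypothetical, chosen so it does not
# clash with the existing one-hot placeholder y.
y_int = tf.placeholder(tf.int32, shape=(None,), name="y_int")
y_one_hot = tf.one_hot(y_int, depth=2)  # shape (batch_size, 2), float32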

Code (with random data):

import tensorflow as tf
import numpy as np

n_inputs = 28
n_hidden1 = 15
n_hidden2 = 5
n_outputs = 2

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") 
y = tf.placeholder(tf.int32, shape=(None, 2), name="y")   #None => any

def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.shape[1])
        stddev = 2 / np.sqrt(n_inputs) 
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)  # n_inputs x n_neurons matrix of values close to 0
        W = tf.Variable(init, name="kernel")  # random weights
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        tf.cast(Z, tf.int32)  # note: this cast has no effect because its result is never assigned
        if activation is not None:
            return activation(Z)
        else:
            return Z

def to_one_hot(y):
    n_classes = y.max() + 1
    m = len(y)
    Y_one_hot = np.zeros((m, n_classes))
    Y_one_hot[np.arange(m), y] = 1
    return Y_one_hot

hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
                           activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
                           activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
xentropy = tf.keras.backend.binary_crossentropy(tf.to_float(y), logits)  # note: by default binary_crossentropy expects probabilities, not raw logits; from_logits=True would be the idiomatic choice
loss = tf.reduce_mean(xentropy)
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits,tf.argmax(y, 1),1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 1

X_train = np.random.rand(100, 28)
X_train = X_train.astype(np.float32)

Y_train = np.random.randint(low = 0, high = 2, size = 100, dtype=np.int32) 

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        _, cost, corr, acc = sess.run([training_op, loss, correct, accuracy], feed_dict={X: X_train, y: to_one_hot(Y_train)})
        print(corr)
        print('Loss: {} Accuracy: {}'.format(cost, acc))
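If you would rather keep a single output neuron, as the question intended, one option (again a sketch of my own, not part of the code above) is to skip in_top_k entirely and threshold the sigmoid probability:

# Hypothetical alternative for n_outputs = 1: threshold the sigmoid
# probability instead of using in_top_k, which needs one column per class.
# Assumes y is the integer-label placeholder of shape (batch_size,).
probs = tf.nn.sigmoid(logits)                       # shape (batch_size, 1)
predicted = tf.cast(probs > 0.5, tf.int32)          # 0/1 predictions
correct = tf.equal(tf.reshape(predicted, [-1]), y)  # elementwise comparison
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))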
