
Create a neural network to classify the MNIST dataset without using Keras. Error: tape is required when a Tensor loss is passed

Here is the code. I have completed the forward pass, but I get an error whenever I run it, and I don't know what the problem is. I first create batches from the features and labels, do the forward pass, and try to use the Keras SGD optimizer.

This is the error I get:

ValueError: `tape` is required when a `Tensor` loss is passed.

Here is my code:

import tensorflow as tf
from tensorflow.keras.utils import to_categorical
import numpy as np
def batches(batch_size, features, labels):
    """
    Create batches of features and labels
    :param batch_size: The batch size
    :param features: List of features
    :param labels: List of labels
    :return: Batches of (Features, Labels)
    """
    assert len(features) == len(labels)
    output_batches = []
    
    sample_size = len(features)
    features = tf.Variable(features, dtype='float32')
    for start_i in range(0, sample_size, batch_size):
        end_i = start_i + batch_size
        batch = (features[start_i:end_i], labels[start_i:end_i])
        output_batches.append(batch)
        
    return output_batches


def get_logits(features, weights, biases):
    # network's forward pass: multiply inputs by the weights and add the biases
    return tf.add(tf.matmul(features, weights), biases)


def get_cost(logits, labels):
    # returns the mean softmax cross-entropy cost of the pass
    return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

def vectorize(features):
    # reshapes each 28x28 image into a flat vector for input
    return features.reshape(features.shape[0], features.shape[1] * features.shape[2])

(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
train_x, test_x = train_x.astype('float32'), test_x.astype('float32')
train_y, test_y = to_categorical(train_y, 10), to_categorical(test_y, 10)
train_x = vectorize(train_x)

n_inputs = 28 * 28
n_classes = 10

weights = tf.Variable(tf.random.normal([n_inputs, n_classes]), dtype='float32', name='weights')
biases = tf.Variable(tf.random.normal([n_classes]), dtype='float32', name='biases')

batch_list = batches(32, train_x, train_y)
for x, y in batch_list:
    logits = get_logits(x, weights, biases)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
    opt = tf.keras.optimizers.SGD(learning_rate=0.001)
    optimizer = opt.minimize(loss=cost)
   

This is because your loss is a Tensor. In optimizer.minimize(), the loss argument can be either a Tensor or a callable. If it is a callable, it should take no arguments and return the value to minimize. If loss is a Tensor, you must pass the tape argument.

So the modified code could look like this:

opt = tf.keras.optimizers.SGD(learning_rate=0.001)

for x, y in batch_list:
    with tf.GradientTape() as tape:
        logits = get_logits(x, weights, biases)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
    opt.minimize(loss=cost, var_list=[weights, biases], tape=tape)
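Since the loss argument can also be a callable, an equivalent version that avoids the tape argument altogether (a sketch reusing the same get_logits helper from the question) is:

opt = tf.keras.optimizers.SGD(learning_rate=0.001)

for x, y in batch_list:
    # The zero-argument callable closes over the current batch and returns the scalar loss.
    def loss_fn():
        logits = get_logits(x, weights, biases)
        return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

    opt.minimize(loss=loss_fn, var_list=[weights, biases])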

Hi, I made some changes to your code. I am not sure if it fits your case, but this is how I usually do it:

optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)

for x, y in batch_list:
    with tf.GradientTape() as tape:
        logits = get_logits(x, weights, biases)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
        
    grads = tape.gradient(cost, [weights, biases])
    optimizer.apply_gradients(zip(grads, [weights, biases]))
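
After the loop finishes, a quick check of the learned weights might look like the sketch below, reusing the vectorize and get_logits helpers from the question to measure test-set accuracy (the variable names here are illustrative):

test_x = vectorize(test_x)
test_logits = get_logits(test_x, weights, biases)
# Compare the predicted class index against the one-hot test labels.
predictions = tf.argmax(test_logits, axis=1)
truth = tf.argmax(test_y, axis=1)
accuracy = tf.reduce_mean(tf.cast(predictions == truth, tf.float32))
print('Test accuracy:', accuracy.numpy())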
    

