
Create a neural network to classify the MNIST dataset without using Keras. Error: tape is required when a Tensor loss is passed

Here is the code. I have implemented the forward pass, but I get an error every time I run it and I don't know what the problem is. I first create batches from the features and labels, do the forward pass, and then try to use the Keras SGD optimizer.

This is the error I get:

    tape is required when a Tensor loss is passed

And this is my code:

import tensorflow as tf
from tensorflow.keras.utils import to_categorical
import numpy as np
def batches(batch_size, features, labels):
    """
    Create batches of features and labels
    :param batch_size: The batch size
    :param features: List of features
    :param labels: List of labels
    :return: Batches of (Features, Labels)
    """
    assert len(features) == len(labels)
    output_batches = []
    
    sample_size = len(features)
    features = tf.Variable(features, dtype='float32')
    for start_i in range(0, sample_size, batch_size):
        end_i = start_i + batch_size
        batch = (features[start_i:end_i], labels[start_i:end_i])
        output_batches.append(batch)
        
    return output_batches


def get_logits(features, weights, biases):
    # network's forward pass: multiply inputs by the weights and add the biases
    return tf.add(tf.matmul(features, weights), biases)


def get_cost(logits, labels):
    # returns the mean softmax cross-entropy cost for the pass
    return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

def vectorize(features):
    # reshapes each 28x28 image into a flat 784-element vector
    return features.reshape(features.shape[0], features.shape[1] * features.shape[2])

(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
train_x, test_x = train_x.astype('float32'), test_x.astype('float32')
train_x, test_y = train_x.astype('float32'), test_y.astype('float32')
train_y, test_y = to_categorical(train_y, 10), to_categorical(test_y, 10)
train_x = vectorize(train_x)

n_inputs = 28 * 28
n_classes = 10

weights = tf.Variable(tf.random.normal([n_inputs, n_classes]), dtype='float32', name='weights')
biases = tf.Variable(tf.random.normal([n_classes]), dtype='float32', name='biases')

batch_list = batches(32, train_x, train_y)
for x, y in batch_list:
    logits = get_logits(x, weights, biases)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
    opt = tf.keras.optimizers.SGD(learning_rate=0.001)
    optimizer = opt.minimize(loss=cost)
   

This is because your loss is a Tensor. In optimizer.minimize(), the loss argument can be either a Tensor or a callable. If it is a callable, it should take no arguments and return the value to minimize. If loss is a Tensor, the tape argument must be passed.

So the modified code could be like this:

opt = tf.keras.optimizers.SGD(learning_rate=0.001)

for x, y in batch_list:
    with tf.GradientTape() as tape:
        logits = get_logits(x, weights, biases)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
    opt.minimize(loss=cost, var_list=[weights, biases], tape=tape)
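
For completeness, here is a minimal sketch of the other option described above: pass the loss as a zero-argument callable instead of a Tensor, in which case minimize() computes the gradients itself and no tape is needed. This variant is my own addition, not part of the original answer.

    # Sketch of the callable-loss variant: no GradientTape is required,
    # because minimize() records the forward pass itself.
    opt = tf.keras.optimizers.SGD(learning_rate=0.001)

    for x, y in batch_list:
        def loss_fn():
            logits = get_logits(x, weights, biases)
            return tf.reduce_mean(
                tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

        opt.minimize(loss_fn, var_list=[weights, biases])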

Hi, I made a little change to your code. I'm not sure if it fits your situation, but I would normally do it like this:

optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)

for x, y in batch_list:
    with tf.GradientTape() as tape:
        logits = get_logits(x, weights, biases)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

    # compute gradients of the cost w.r.t. the trainable variables and apply them
    grads = tape.gradient(cost, [weights, biases])
    optimizer.apply_gradients(zip(grads, [weights, biases]))
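
Not part of the answer, but as a quick sanity check after training you can reuse the question's vectorize and get_logits helpers to measure accuracy on the test set (assuming test_x and test_y were prepared as in the question's code):

    # Rough accuracy check on the test set, reusing the helpers from the question.
    test_x_vec = vectorize(test_x)
    test_logits = get_logits(tf.constant(test_x_vec), weights, biases)
    predictions = tf.argmax(test_logits, axis=1)
    true_labels = tf.argmax(test_y, axis=1)   # test_y is one-hot encoded
    accuracy = tf.reduce_mean(tf.cast(predictions == true_labels, tf.float32))
    print('test accuracy:', accuracy.numpy())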
    
