
AttributeError: 'dict' object has no attribute 'train' error when trying to implement a convolutional neural network program using tensorflow in python

Although I am still very new to this topic, I am trying to implement a CNN program that can recognize images without using Keras. I am currently using Python with Jupyter/Google Colab.

After fixing several other errors that came up in my code, I am now running into this one:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py:1761: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).
  warnings.warn('An interactive session is already active. This can '

Training the model....
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-28-1af250ed46b3> in <module>()
    108     for i in range(num_iterations):
    109         # Get the next batch of images
--> 110         batch = mnist.train.next_batch(batch_size)              #third got an error for this line -
    111         # x_batch, y_batch = mnist.train.next_batch(batch_size)
    112 

AttributeError: 'dict' object has no attribute 'train'

Here is my current code:

!pip install tensorflow_datasets
!pip install --upgrade tensorflow
!pip install tensorflow-datasets
!pip install mnist
#!pip install tensorflow.examples.tutorials.mnist

import argparse
print ('argparse version: ', argparse.__version__)
import mnist
print ('MNIST version: ', mnist.__version__)
import tensorflow_datasets
print ('tensorflow_datasets version: ', tensorflow_datasets.__version__)
import tensorflow.compat.v1 as tf
print ('tf version: ', tf.__version__)
tf.disable_v2_behavior()
#from tensorflow.examples.tutorials.mnist import input_data


#def build_arg_parser():
#    parser = argparse.ArgumentParser(description='Build a CNN classifier \
#            using MNIST data')
#    parser.add_argument('--input-dir', dest='input_dir', type=str,
#            default='./mnist_data', help='Directory for storing data')
#    return parser

def get_weights(shape):
    data = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(data)

def get_biases(shape):
    data = tf.constant(0.1, shape=shape)
    return tf.Variable(data)

def create_layer(shape):
    # Get the weights and biases
    W = get_weights(shape)
    b = get_biases([shape[-1]])

    return W, b

def convolution_2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1],
            padding='SAME')

def max_pooling(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
            strides=[1, 2, 2, 1], padding='SAME')

if __name__ == '__main__':
    #args = build_arg_parser().parse_args()

    # Get the MNIST data
    mnist = tensorflow_datasets.load('mnist')

    # The images are 28x28, so create the input layer
    # with 784 neurons (28x28=784)
    x = tf.placeholder(tf.float32, [None, 784])

    # Reshape 'x' into a 4D tensor
    x_image = tf.reshape(x, [-1, 28, 28, 1])

    # Define the first convolutional layer
    W_conv1, b_conv1 = create_layer([5, 5, 1, 32])

    # Convolve the image with weight tensor, add the
    # bias, and then apply the ReLU function
    h_conv1 = tf.nn.relu(convolution_2d(x_image, W_conv1) + b_conv1)

    # Apply the max pooling operator
    h_pool1 = max_pooling(h_conv1)

    # Define the second convolutional layer
    W_conv2, b_conv2 = create_layer([5, 5, 32, 64])

    # Convolve the output of previous layer with the
    # weight tensor, add the bias, and then apply
    # the ReLU function
    h_conv2 = tf.nn.relu(convolution_2d(h_pool1, W_conv2) + b_conv2)

    # Apply the max pooling operator
    h_pool2 = max_pooling(h_conv2)

    # Define the fully connected layer
    W_fc1, b_fc1 = create_layer([7 * 7 * 64, 1024])

    # Reshape the output of the previous layer
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])

    # Multiply the output of previous layer by the
    # weight tensor, add the bias, and then apply
    # the ReLU function
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    # Define the dropout layer using a probability placeholder
    # for all the neurons
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    # Define the readout layer (output layer)
    W_fc2, b_fc2 = create_layer([1024, 10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    # Define the entropy loss and the optimizer
    y_loss = tf.placeholder(tf.float32, [None, 10])
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_loss, logits=y_conv))
    optimizer = tf.train.AdamOptimizer(1e-4).minimize(loss)

    # Define the accuracy computation
    predicted = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_loss, 1))
    accuracy = tf.reduce_mean(tf.cast(predicted, tf.float32))

    # Create and run a session
    sess = tf.InteractiveSession()
    init = tf.initialize_all_variables()
    sess.run(init)

    # Start training
    num_iterations = 21000
    batch_size = 75
    print('\nTraining the model....')
    for i in range(num_iterations):
        # Get the next batch of images
        batch = mnist.train.next_batch(batch_size)

        # Print progress
        if i % 50 == 0:
            cur_accuracy = accuracy.eval(feed_dict = {
                    x: batch[0], y_loss: batch[1], keep_prob: 1.0})
            print('Iteration', i, ', Accuracy =', cur_accuracy)

        # Train on the current batch
        optimizer.run(feed_dict = {x: batch[0], y_loss: batch[1], keep_prob: 0.5})

    # Compute accuracy using test data
    print('Test accuracy =', accuracy.eval(feed_dict = {
            x: mnist.test.images, y_loss: mnist.test.labels,
            keep_prob: 1.0}))

I have found posts with code identical to mine, yet their implementations all run somehow. I tried to look up a solution, but in the end none of the ones I found worked for me.

One post I found said that "a basic dictionary does not have a 'train' attribute." That made me curious: if dictionaries normally do not have this attribute, why do I get this error while the same code works for other people?
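As far as I can tell, that would mean the mnist variable in my code is just the plain dict returned by tensorflow_datasets.load('mnist'), so the splits would have to be accessed by key rather than as attributes. A small illustration of what I think those posts mean (run separately, just to check the access pattern):

import tensorflow_datasets as tfds

mnist = tfds.load('mnist')      # a plain dict keyed by split name
print(sorted(mnist.keys()))     # ['test', 'train']
train_split = mnist['train']    # dictionary access works
# train_split = mnist.train     # AttributeError: 'dict' object has no attribute 'train'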

Another post used the line:

x_batch, y_batch = mnist.train.next_batch(batch_size)

instead of the line:

batch = mnist.train.next_batch(batch_size)

but neither seems to work for me. None of the other changes/solutions I have looked into ended up working either.

Does anyone know how to fix this attribute error?

It seems that your code is old/outdated, which is why it no longer works. It is quite common for the TensorFlow libraries to change frequently, and they also frequently break their interfaces, so old code stops working.

First, you try to import mnist, which is the wrong module: it contains almost no code and appears to be useless here. It probably was useful and worked at some point, but it does not now.

The function mnist.train.next_batch(...) no longer works either, because it is no longer implemented for the mnist dataset; it, too, probably worked in the past.
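To see the mismatch concretely, this is roughly what tfds.load('mnist') gives you today and how batching is usually done with the tf.data API. A minimal sketch, assuming TF2's default eager mode; it is shown only for illustration and is not used in the corrected code below:

import tensorflow_datasets as tfds

ds = tfds.load('mnist')                    # plain dict: {'train': tf.data.Dataset, 'test': tf.data.Dataset}
train_ds = ds['train'].shuffle(10000).batch(75)

for example in train_ds.take(1):
    images = example['image']              # shape (75, 28, 28, 1), dtype uint8
    labels = example['label']              # shape (75,), dtype int64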

I decided to implement my own helper class MyDS that provides all of this missing functionality. Below is your complete corrected code, including my class (at the beginning):

if __name__ == '__main__':
    import tensorflow.compat.v1 as tf
    tf.enable_eager_execution()
    import tensorflow_datasets as tfds

    class MyDS(object):
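        # SubDS wraps a single split (e.g. 'train') and mimics the old mnist.train interface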
        class SubDS(object):
            import numpy as np
            def __init__(self, ds, *, one_hot):
                np = self.__class__.np
                self.ds = [e for e in ds.as_numpy_iterator()]
                self.sds = {(k + 's') : np.stack([
                    (e[k] if len(e[k].shape) > 0 else e[k][None]).reshape(-1) for e in self.ds
                ], 0) for k in self.ds[0].keys()}
                self.one_hot = one_hot
                if one_hot is not None:
                    self.max_one_hot = np.max(self.sds[one_hot + 's'])
            def _to_one_hot(self, a, maxv):
                np = self.__class__.np
                na = np.zeros((a.shape[0], maxv + 1), dtype = a.dtype)
                for i, e in enumerate(a[:, 0]):
                    na[i, e] = True
                return na
            def _apply_one_hot(self, key, maxv):
                assert maxv >= self.max_one_hot, (maxv, self.max_one_hot)
                self.max_one_hot = maxv
                self.sds[key + 's'] = self._to_one_hot(self.sds[key + 's'], self.max_one_hot)
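            # Sample a random batch of 'num' examples; the result can be indexed by feature name
            # or by position (batch[0], batch[1]), like the old (images, labels) tuple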
            def next_batch(self, num = 16):
                np = self.__class__.np
                idx = np.random.choice(len(self.ds), num)
                res = {k : np.stack([
                    (self.ds[i][k] if len(self.ds[i][k].shape) > 0 else self.ds[i][k][None]).reshape(-1) for i in idx
                ], 0) for k in self.ds[0].keys()}
                if self.one_hot is not None:
                    res[self.one_hot] = self._to_one_hot(res[self.one_hot], self.max_one_hot)
                for i, (k, v) in enumerate(list(res.items())):
                    res[i] = v
                return res
            def __getattr__(self, name):
                if name not in self.__dict__['sds']:
                    return self.__dict__[name]
                return self.__dict__['sds'][name]
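        # Load the dataset with tfds.load and wrap each split in a SubDS, so that
        # mnist.train / mnist.test work again; labels are optionally one-hot encoded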
        def __init__(self, name, *, one_hot = None):
            self.ds = tfds.load(name)
            self.sds = {}
            for k, v in self.ds.items():
                self.sds[k] = self.__class__.SubDS(self.ds[k], one_hot = one_hot)
            if one_hot is not None:
                maxh = max(e.max_one_hot for e in self.sds.values())
                for e in self.sds.values():
                    e._apply_one_hot(one_hot, maxh)
        def __getattr__(self, name):
            if name not in self.__dict__['sds']:
                return self.__dict__[name]
            return self.__dict__['sds'][name]
            
    # Get the MNIST data
    mnist = MyDS('mnist', one_hot = 'label') # tensorflow_datasets.load('mnist')

    import argparse
    print ('argparse version: ', argparse.__version__)
    #import mnist
    #print ('MNIST version: ', mnist.__version__)
    #import tensorflow_datasets
    print ('tensorflow_datasets version: ', tfds.__version__)
    #import tensorflow.compat.v1 as tf
    print ('tf version: ', tf.__version__)
    tf.disable_eager_execution()
    tf.disable_v2_behavior()
    #from tensorflow.examples.tutorials.mnist import input_data


    #def build_arg_parser():
    #    parser = argparse.ArgumentParser(description='Build a CNN classifier \
    #            using MNIST data')
    #    parser.add_argument('--input-dir', dest='input_dir', type=str,
    #            default='./mnist_data', help='Directory for storing data')
    #    return parser

    def get_weights(shape):
        data = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(data)

    def get_biases(shape):
        data = tf.constant(0.1, shape=shape)
        return tf.Variable(data)

    def create_layer(shape):
        # Get the weights and biases
        W = get_weights(shape)
        b = get_biases([shape[-1]])

        return W, b

    def convolution_2d(x, W):
        return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1],
                padding='SAME')

    def max_pooling(x):
        return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                strides=[1, 2, 2, 1], padding='SAME')

    #args = build_arg_parser().parse_args()

    # The images are 28x28, so create the input layer
    # with 784 neurons (28x28=784)
    x = tf.placeholder(tf.float32, [None, 784])

    # Reshape 'x' into a 4D tensor
    x_image = tf.reshape(x, [-1, 28, 28, 1])

    # Define the first convolutional layer
    W_conv1, b_conv1 = create_layer([5, 5, 1, 32])

    # Convolve the image with weight tensor, add the
    # bias, and then apply the ReLU function
    h_conv1 = tf.nn.relu(convolution_2d(x_image, W_conv1) + b_conv1)

    # Apply the max pooling operator
    h_pool1 = max_pooling(h_conv1)

    # Define the second convolutional layer
    W_conv2, b_conv2 = create_layer([5, 5, 32, 64])

    # Convolve the output of previous layer with the
    # weight tensor, add the bias, and then apply
    # the ReLU function
    h_conv2 = tf.nn.relu(convolution_2d(h_pool1, W_conv2) + b_conv2)

    # Apply the max pooling operator
    h_pool2 = max_pooling(h_conv2)

    # Define the fully connected layer
    W_fc1, b_fc1 = create_layer([7 * 7 * 64, 1024])

    # Reshape the output of the previous layer
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])

    # Multiply the output of previous layer by the
    # weight tensor, add the bias, and then apply
    # the ReLU function
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    # Define the dropout layer using a probability placeholder
    # for all the neurons
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    # Define the readout layer (output layer)
    W_fc2, b_fc2 = create_layer([1024, 10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    # Define the entropy loss and the optimizer
    y_loss = tf.placeholder(tf.float32, [None, 10])
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_loss, logits=y_conv))
    optimizer = tf.train.AdamOptimizer(1e-4).minimize(loss)

    # Define the accuracy computation
    predicted = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_loss, 1))
    accuracy = tf.reduce_mean(tf.cast(predicted, tf.float32))

    # Create and run a session
    sess = tf.InteractiveSession()
    init = tf.initialize_all_variables()
    sess.run(init)

    # Start training
    num_iterations = 21000
    batch_size = 75
    print('\nTraining the model....')
    for i in range(num_iterations):
        # Get the next batch of images
        batch = mnist.train.next_batch(batch_size)

        # Print progress
        if i % 50 == 0:
            cur_accuracy = accuracy.eval(feed_dict = {
                    x: batch[0], y_loss: batch[1], keep_prob: 1.0})
            print('Iteration', i, ', Accuracy =', cur_accuracy)

        # Train on the current batch
        optimizer.run(feed_dict = {x: batch[0], y_loss: batch[1], keep_prob: 0.5})

    # Compute accuracy using test data
    print('Test accuracy =', accuracy.eval(feed_dict = {
            x: mnist.test.images, y_loss: mnist.test.labels,
            keep_prob: 1.0}))
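With the mnist = MyDS('mnist', one_hot = 'label') object created above, here is a quick sanity check (hypothetical, not part of the corrected code) that the helper reproduces the shapes the old interface provided:

# Hypothetical sanity check, using the mnist object created by the code above
batch = mnist.train.next_batch(75)
print(batch[0].shape, batch[1].shape)                      # expected: (75, 784) (75, 10)
print(mnist.test.images.shape, mnist.test.labels.shape)    # expected: (10000, 784) (10000, 10)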
