
Tensorflow CNN 'tuple' object has no attribute 'initializer'

It seems like I am messing up a step in preparing the dataset. I couldn't find a proper answer or the correct solution in the documentation. I have marked the problem line with ### near the bottom.

def parse_file(data_path):
    imagepaths = list()
    labels = list()
    # a working os-based parser that fills imagepaths and labels goes here

    imagepaths = tf.constant(imagepaths, dtype=tf.string)
    labels = tf.constant(labels, dtype=tf.float32)

    return imagepaths, labels


def parse_image(imagepath, label):

    image_string = tf.read_file(imagepath)
    image_decoded = tf.image.decode_png(image_string, channels=3)
    # The image size is 425x425.
    image_resized = tf.image.resize_images(image_decoded, [img_size, img_size])
    image_normalized = image_resized * 1.0/255
    print(image_normalized)
    print(label)
    return image_normalized, label

parsed_files = parse_file(data_path)
dataset = tf.data.Dataset.from_tensor_slices(parsed_files)
dataset = dataset.map(parse_image)
dataset = dataset.batch(batch_size)

iterator = dataset.make_initializable_iterator()
iterator = iterator.get_next()

x = tf.placeholder(tf.float32, [None, img_size, img_size, channels])
y = tf.placeholder(tf.float32, [None, 1])

(The model definition goes here; it is not relevant to the question.)

with tf.Session() as sess:

    ### AttributeError: 'tuple' object has no attribute 'initializer'
    sess.run(iterator.initializer)
    batch_x, batch_y = iterator.get_next()
    test1, test2 = sess.run([batch_x, batch_y])
    total_batch = int(total_input[0] / batch_size)
    # define the iterator for the network
    for epoch in range(epochs):
        avg_cost = 0
        for i in range(total_batch):
            batch_x, batch_y = sess.run(iterator)
            _, c = sess.run([optimiser, cross_entropy], feed_dict={x: batch_x, y: batch_y})
            avg_cost += c / total_batch

        test_acc = sess.run(accuracy,feed_dict={x: test_x, y: np.expand_dims(test_y, axis=-1)})
        print("Epoch:", (epoch + 1), "cost =", "{:.3f}".format(avg_cost), " test accuracy: {:.3f}".format(test_acc))
        summary = sess.run(merged, feed_dict={x: test_x, y: np.expand_dims(test_y, axis=-1)})

    print("\nTraining complete!")
    print(sess.run(accuracy, feed_dict={x: test_x, y: np.expand_dims(test_y, axis=-1)}))

I have no experience with tf.data Datasets, but here is what might be going wrong:

iterator = dataset.make_initializable_iterator()
iterator = iterator.get_next()

First you create an iterator, and then you overwrite it by asking for data from it with the .get_next() method, which gives you a tuple of tensors rather than an iterator. Then you do:

sess.run(iterator.initializer)

And you get your error because iterator no longer refers to the object returned by make_initializable_iterator(). Have you tried this:

iterator = dataset.make_initializable_iterator()
with tf.Session() as sess:
    sess.run(iterator.initializer)

You might still get more errors after that, and maybe I'm wrong about this, since I'm not used to working with tf.data.
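
To make that concrete, here is a minimal sketch of how the setup from the question could be restructured, keeping the Iterator object and the tensors returned by get_next() in separate variables. Names such as data_path, parse_file, parse_image, and batch_size are taken from the question and assumed to be defined:

parsed_files = parse_file(data_path)
dataset = tf.data.Dataset.from_tensor_slices(parsed_files)
dataset = dataset.map(parse_image)
dataset = dataset.batch(batch_size)

iterator = dataset.make_initializable_iterator()  # keep the Iterator object itself
next_element = iterator.get_next()                # separate (image_batch, label_batch) tensors

with tf.Session() as sess:
    sess.run(iterator.initializer)                # works: iterator still has an .initializer
    batch_x, batch_y = sess.run(next_element)     # fetch one batch as NumPy arrays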

Check out this example I found in the TensorFlow documentation:

max_value = tf.placeholder(tf.int64, shape=[])
dataset = tf.data.Dataset.range(max_value)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

# Initialize an iterator over a dataset with 10 elements.
sess.run(iterator.initializer, feed_dict={max_value: 10})
for i in range(10):
    value = sess.run(next_element)
    assert i == value

# Initialize the same iterator over a dataset with 100 elements.
sess.run(iterator.initializer, feed_dict={max_value: 100})
for i in range(100):
    value = sess.run(next_element)
    assert i == value
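
Applied to the training loop in the question, the same pattern would look roughly like this. It is only a sketch: optimiser, cross_entropy, x, y, epochs, batch_size, and total_input are assumed to be defined exactly as in the question.

next_element = iterator.get_next()                 # call get_next() once, outside the loop

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # initialize the model variables
    total_batch = int(total_input[0] / batch_size)
    for epoch in range(epochs):
        sess.run(iterator.initializer)             # restart the dataset at each epoch
        avg_cost = 0
        for i in range(total_batch):
            batch_x, batch_y = sess.run(next_element)  # run the tensors, not the iterator
            _, c = sess.run([optimiser, cross_entropy],
                            feed_dict={x: batch_x, y: batch_y})
            avg_cost += c / total_batch
        print("Epoch:", epoch + 1, "cost =", "{:.3f}".format(avg_cost))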
