
Tensorflow unhashable type 'list' in sess.run

There are literally thousands of these posts but I haven't seen one yet that addresses my exact problem. Please feel free to close this if one exists.

I understand that lists are mutable in Python, and therefore unhashable, so we cannot use a list as a key in a dictionary.
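For example, trying to use a list as a dictionary key in a plain Python shell (nothing TensorFlow-specific here) gives exactly that error:

{[1, 2, 3]: 'test'}
TypeError: unhashable type: 'list'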

I have the following code (a ton of it is left out because it is irrelevant):

with tf.Session() as sess:
    sess.run(init)
    step = 1

    while step * batch_size < training_iterations:
        for batch_x, batch_y in batch(train_x, train_y, batch_size):

            batch_x = np.reshape(batch_x, (batch_x.shape[0],
                                           1,
                                           batch_x.shape[1]))
            # astype returns a copy, so assign it back
            batch_x = batch_x.astype(np.float32)

            batch_y = np.reshape(batch_y, (batch_y.shape[0], 1))
            batch_y = batch_y.astype(np.float32)

            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            if step % display_step == 0:
                # Calculate batch accuracy
                acc = sess.run(accuracy,
                               feed_dict={x: batch_x, y: batch_y})
                # Calculate batch loss
                loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
                print("Iter " + str(step*batch_size) +
                      ", Minibatch Loss= " +
                      "{:.6f}".format(loss) + ", Training Accuracy= " +
                      "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")

train_x is a [batch_size, num_features] numpy matrix

train_y is a [batch_size, num_results] numpy matrix

I have the following placeholders in my graph:

x = tf.placeholder(tf.float32, shape=(None, num_steps, num_input))
y = tf.placeholder(tf.float32, shape=(None, num_res))

So naturally I need to transform my train_x and train_y into the format TensorFlow expects.

I do that with the following:

 batch_x = np.reshape(batch_x, (batch_x.shape[0],
                                1,
                                batch_x.shape[1]))

 batch_y = np.reshape(batch_y, (batch_y.shape[0], 1))

This gives me two numpy.ndarray objects:

batch_x is of dimensions [batch_size, timesteps, features]

batch_y is of dimensions [batch_size, num_results]

As expected by our graph.
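Just to double-check the reshape behaviour in isolation, here is a standalone snippet (the sizes are made-up example numbers, with num_results = 1, not my real data):

import numpy as np

batch_x = np.zeros((10, 4), dtype=np.float32)   # [batch_size, num_features]
batch_y = np.zeros((10, 1), dtype=np.float32)   # [batch_size, num_results]

batch_x = np.reshape(batch_x, (batch_x.shape[0], 1, batch_x.shape[1]))
batch_y = np.reshape(batch_y, (batch_y.shape[0], 1))

print(batch_x.shape)  # (10, 1, 4) -> [batch_size, timesteps, features]
print(batch_y.shape)  # (10, 1)    -> [batch_size, num_results]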

Now, when I pass these reshaped numpy.ndarray arrays, I get TypeError: unhashable type: 'list' on the following line:

sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})

This seems strange to me because firing up python:

import numpy as np
a = np.zeros((10,3,4))
{a : 'test'}
TypeError: unhashable type: 'numpy.ndarray'

You can see I get an entirely different error message.

Further in my code I perform a series of transformations on the data:

x = tf.transpose(x, [1, 0, 2])
x = tf.reshape(x, [-1, num_input])
x = tf.split(0, num_steps, x)


lstm_cell = rnn_cell.BasicLSTMCell(num_hidden, forget_bias=forget_bias)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

And the only place a list occurs is after the tf.split, which produces the list of num_steps tensors that rnn.rnn expects.

I am at a complete loss here. I feel like I'm staring right at the solution and I can't see it. Can anyone help me out here?

Thank you!

I feel kind of silly here but I am sure someone else will have this problem.

The line above, where tf.split produces a list, is the problem.

I did not split these transformations into separate functions; I modified x directly (as shown in my code) and never changed the name. So by the time sess.run was called, x was no longer the tensor placeholder I expected but a list of tensors produced by the graph transformations.

Renaming each transformation of x solved the problem.
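In other words, the fix looked roughly like this (same old-style TF calls as in the question; the names x_t, x_flat and x_steps are just illustrative):

x = tf.placeholder(tf.float32, shape=(None, num_steps, num_input))

# Give each transformation its own name so x stays the placeholder
x_t = tf.transpose(x, [1, 0, 2])           # [num_steps, batch_size, num_input]
x_flat = tf.reshape(x_t, [-1, num_input])  # [num_steps * batch_size, num_input]
x_steps = tf.split(0, num_steps, x_flat)   # list of num_steps tensors

lstm_cell = rnn_cell.BasicLSTMCell(num_hidden, forget_bias=forget_bias)
outputs, states = rnn.rnn(lstm_cell, x_steps, dtype=tf.float32)

# Later, x is still the placeholder, so feeding it works:
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})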

I hope this helps someone.

This error also occurs if x and y in feed_dict={x: batch_x, y: batch_y} are for some reason lists. In my case I had misspelled them as X and Y, and those names were lists elsewhere in my code.

I had accidentally set the variable x to a Python list in my code.

Why did it throw this error? Because in _, loss = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}), one of the feed_dict keys (x or y) was a list instead of a tensor. The keys must be tensors, so print out the types of both variables to see what is wrong with the code.
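For example, a quick check right before the sess.run call (assuming the same x, y, batch_x and batch_y names as in the question):

print(type(x), type(y))              # feed_dict keys: should be placeholder Tensors
print(type(batch_x), type(batch_y))  # feed_dict values: should be numpy.ndarray
# If x or y prints as <class 'list'>, that is what triggers
# "unhashable type: 'list'", since dict keys must be hashable.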
