
tensorflow shuffle_batch and feed_dict error

This is the main part of my code. I'm confused about the functions shuffle_batch and feed_dict.

In my code below, the features and labels I put into the function are lists. (I also tried arrays before, but it doesn't seem to matter.)

What I want to do is split my training data (6144, 26) and training labels (6144, 13) into batches of shape (100, 26) and (100, 13), then set them as the feed_dict for the placeholders.

My questions are:

1. The outputs of tf.train.shuffle_batch are Tensors. But I can't put Tensors in the feed_dict, right?

2. When I run the last two lines, the error says got shape [6144, 26], but wanted [6144]. I know it is probably a dimension error, but how can I fix it?

Thanks a lot.

import tensorflow as tf
import numpy as np
import scipy.io as sio

#import signal matfile
#[('label', (8192, 13), 'double'), ('clipped_DMT', (8192, 26), 'double')]
file = sio.loadmat('DMTsignal.mat')

#get array(clipped_DMT)
data_cDMT = file['clipped_DMT']
#get array(label)
data_label = file['label']


with tf.variable_scope('split_cDMT'):
    cDMT_test_list = []
    cDMT_training_list = []
    for i in range(0,8192):
        if i % 4 == 0:
            cDMT_test_list.append(data_cDMT[i])
        else:
            cDMT_training_list.append(data_cDMT[i])


with tf.variable_scope('split_label'):
    label_test_list = []
    label_training_list = []
    for i in range(0,8192):
        if i % 4 == 0:
            label_test_list.append(data_label[i])
        else:
            label_training_list.append(data_label[i])


#convert the training lists to numpy arrays so that .shape works below
cDMT_training = np.asarray(cDMT_training_list)
label_training = np.asarray(label_training_list)

#set parameters
n_features = cDMT_training.shape[1]
n_labels   = label_training.shape[1]
learning_rate = 0.8
hidden_1 = 256
hidden_2 = 128
training_steps = 1000
BATCH_SIZE = 100


#set Graph input
with tf.variable_scope('cDMT_Inputs'):
    X = tf.placeholder(tf.float32,[None, n_features],name = 'Input_Data')
with tf.variable_scope('labels_Inputs'):
    Y = tf.placeholder(tf.float32,[None, n_labels],name = 'Label_Data')


#set variables 
#initialize the weights and biases with random normal values
with tf.variable_scope('layerWeights'):
    h1 = tf.Variable(tf.random_normal([n_features,hidden_1]))
    h2 = tf.Variable(tf.random_normal([hidden_1,hidden_2]))
    w_out = tf.Variable(tf.random_normal([hidden_2,n_labels]))

with tf.variable_scope('layerBias'):
    b1 = tf.Variable(tf.random_normal([hidden_1]))
    b2 = tf.Variable(tf.random_normal([hidden_2]))
    b_out = tf.Variable(tf.random_normal([n_labels]))

#create model    
def neural_net(x):
    layer_1 = tf.add(tf.matmul(x,h1),b1)
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1,h2),b2))
    out_layer = tf.add(tf.matmul(layer_2,w_out),b_out)
    return out_layer

nn_out = neural_net(X)

#loss and optimizer
with tf.variable_scope('Loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits = nn_out,labels = Y)))
with tf.name_scope('Train'):
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)


with tf.name_scope('Accuracy'):
    correct_prediction = tf.equal(tf.argmax(nn_out,1),tf.argmax(Y,1))
    #correct_prediction = tf.metrics.accuracy (labels = Y, predictions =nn_out)
    acc = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

# Initialize
init = tf.global_variables_initializer()


# start computing & training
with tf.Session() as sess:
    sess.run(init)
    for step in range(training_steps): 
        #set batch
        cmt_train_bat, label_train_bat = sess.run(
            tf.train.shuffle_batch([cDMT_training_list, label_training_list],
                                   batch_size=BATCH_SIZE, capacity=50000,
                                   min_after_dequeue=10000))
        cmt_test_bat, label_test_bat = sess.run(
            tf.train.shuffle_batch([cDMT_test_list, label_test_list],
                                   batch_size=BATCH_SIZE, capacity=50000,
                                   min_after_dequeue=10000))

From the Session.run doc:

The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types:

  • If the key is a tf.Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a tf.placeholder, the shape of the value will be checked for compatibility with the placeholder.

  • ...

So you are right: for X and Y (which are placeholders) you can't feed Tensors, and tf.train.shuffle_batch is not designed to work with placeholders.
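A tiny illustration of that rule (TF 1.x): a numpy value can be fed into a placeholder, while a graph Tensor cannot. The names x and doubled here are made up for the demo:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 2])
doubled = x * 2

with tf.Session() as sess:
    # OK: the value is a numpy ndarray with a compatible shape
    print(sess.run(doubled, feed_dict={x: np.ones((3, 2))}))
    # TypeError: the value of a feed cannot be a tf.Tensor object
    # sess.run(doubled, feed_dict={x: tf.ones((3, 2))})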

You can follow one of two ways:

  • get rid of placeholders and use tf.TFRecordReader in combination with tf.train.shuffle_batch, as suggested here. This way you'll have only tensors in your model and you won't need to "feed" anything additionally (see the first sketch after this list).

  • batch and shuffle the data yourself in numpy and feed the batches into the placeholders. This takes just a few lines of code, so I find it easier, though both paths are valid (see the second sketch after this list).
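For the first option, here is a rough sketch of a queue-based input pipeline (TF 1.x). The file name dmt_train.tfrecords and the feature keys 'signal' and 'label' are hypothetical; they assume the data was already serialized as tf.train.Example records with float lists of length 26 and 13:

import tensorflow as tf

# hypothetical file, written beforehand with tf.python_io.TFRecordWriter
filename_queue = tf.train.string_input_producer(['dmt_train.tfrecords'])

reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)

# parse ONE example per read; the keys 'signal'/'label' are assumptions
parsed = tf.parse_single_example(
    serialized,
    features={
        'signal': tf.FixedLenFeature([26], tf.float32),
        'label':  tf.FixedLenFeature([13], tf.float32),
    })

# shuffle_batch now receives per-example tensors, not the whole dataset
signal_batch, label_batch = tf.train.shuffle_batch(
    [parsed['signal'], parsed['label']],
    batch_size=100, capacity=2000, min_after_dequeue=1000)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    s, l = sess.run([signal_batch, label_batch])  # (100, 26), (100, 13)
    coord.request_stop()
    coord.join(threads)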
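For the second option, a minimal sketch that plugs into the question's own training loop (it reuses X, Y, optimizer, BATCH_SIZE and the open sess from the question, and assumes cDMT_training and label_training are numpy arrays of shape (6144, 26) and (6144, 13)):

import numpy as np

# one epoch: shuffle features and labels with the SAME permutation
# so the rows stay aligned
perm = np.random.permutation(len(cDMT_training))
cDMT_shuffled = cDMT_training[perm]
label_shuffled = label_training[perm]

# walk over the data in slices of BATCH_SIZE and feed each slice;
# the last slice may be shorter, which the [None, ...] placeholders accept
for start in range(0, len(cDMT_shuffled), BATCH_SIZE):
    cmt_batch = cDMT_shuffled[start:start + BATCH_SIZE]
    label_batch = label_shuffled[start:start + BATCH_SIZE]
    sess.run(optimizer, feed_dict={X: cmt_batch, Y: label_batch})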

Also take the performance considerations into account.
