
I get an Invalid Argument Error on feeding tensors to a graph in tensorflow?

I am using TensorFlow 1.5 on Windows 10. I am using the TensorFlow-Slim implementation of the Inception V4 network, picked up from the official repository, with its pretrained weights, and I add my own layers at the end to classify 120 different objects. This is the complete code except for the import statements and the dataset paths.

image_size = 299
tf.logging.set_verbosity(tf.logging.INFO)
with slim.arg_scope(inception_blocks_v4.inception_v4_arg_scope()):
    X_input = tf.placeholder(tf.float32, shape = (None, image_size, image_size, 3))
    Y_label = tf.placeholder(tf.float32, shape = (None, num_classes))        
    targets = convert_to_onehot(labels_dir, no_of_features = num_classes)
    targets = tf.constant(targets, dtype = tf.float32)

    Images = [] 
    images = glob.glob(images_file_path)
    i = 0
    for my_img in images:
        image = mpimg.imread(my_img)[:, :, :3]
        image = tf.constant(image, dtype = tf.float32)
        Images.append(image)

    logits, end_points = inception_blocks_v4.inception_v4(inputs = X_input, num_classes = pre_num_classes, is_training = True, create_aux_logits= False)
    pretrained_weights = slim.assign_from_checkpoint_fn(ckpt_dir, slim.get_model_variables('InceptionV4'))
    with tf.Session() as sess:
        pretrained_weights(sess)

    my_layer = slim.fully_connected(
        logits, 560, activation_fn=tf.nn.relu, scope='myLayer1',
        weights_initializer=tf.truncated_normal_initializer(stddev=0.001),
        weights_regularizer=slim.l2_regularizer(0.00005),
        biases_initializer=tf.truncated_normal_initializer(stddev=0.001),
        biases_regularizer=slim.l2_regularizer(0.00005))
    my_layer = slim.dropout(my_layer, keep_prob=0.6, scope='myLayer2')
    my_layer = slim.fully_connected(
        my_layer, num_classes, activation_fn=tf.nn.relu, scope='myLayer3',
        weights_initializer=tf.truncated_normal_initializer(stddev=0.001),
        weights_regularizer=slim.l2_regularizer(0.00005),
        biases_initializer=tf.truncated_normal_initializer(stddev=0.001),
        biases_regularizer=slim.l2_regularizer(0.00005))
    my_layer_logits = slim.fully_connected(my_layer, num_classes, activation_fn=None, scope='myLayer4')
    loss = tf.losses.softmax_cross_entropy(onehot_labels = Y_label, logits = my_layer_logits)  
    optimizer = tf.train.AdamOptimizer(learning_rate=0.0001)
    train_op = slim.learning.create_train_op(loss, optimizer) 
    images, labels = tf.train.batch([Images, targets], batch_size = 8, num_threads = 1, capacity = batch_size, enqueue_many=True)
    tensor_images = tf.convert_to_tensor(images, dtype = tf.float32)
    tensor_labels = tf.convert_to_tensor(labels, dtype = tf.float32)
    with tf.Session() as sess:
        print (tensor_images)
        print (tensor_labels)
    final_loss = slim.learning.train(
        train_op, logdir=new_ckpt_dir, number_of_steps=1000,
        save_summaries_secs=5, log_every_n_steps=50)(
            feed_dict={X_input: tensor_images, Y_label: tensor_labels})  # {X_input:images ,Y_label: labels}

I have tried to pass the correct tensors of the data to the feed_dict of the graph during the training step, and printing them gives the following output.

Tensor("batch:0", shape=(8, 299, 299, 3), dtype=float32, device=/device:CPU:0)
Tensor("batch:1", shape=(8, 120), dtype=float32, device=/device:CPU:0)

But it also outputs the following error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,120]
 [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,120], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

The correct way to feed data generated from tf.train.batch is like this:

Construct your model as follows:

logits, end_points = inception_blocks_v4.inception_v4(
    inputs = tensor_images, num_classes = pre_num_classes, 
    is_training = True, create_aux_logits= False)

And in your loss, you should use:

loss = tf.losses.softmax_cross_entropy(
    onehot_labels = tf.one_hot(tensor_labels, depth = num_classes), logits = my_layer_logits)

Feeding a tensor into a tf.placeholder is not currently supported.

Note: I assume that your tensor_labels are just the indices of the labels.
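
Putting the pieces together, here is a minimal sketch of the corrected pipeline, keeping the variable names from the question and assuming (as in the note above) that the labels are integer class indices; if targets is already one-hot, drop the tf.one_hot call and pass the batched labels to the loss directly. The placeholders and the feed_dict disappear entirely, because slim.learning.train starts its own session and queue runners.

images, labels = tf.train.batch([Images, targets], batch_size = 8,
                                num_threads = 1, capacity = batch_size,
                                enqueue_many = True)

# Wire the batched tensors straight into the network instead of placeholders.
logits, end_points = inception_blocks_v4.inception_v4(
    inputs = images, num_classes = pre_num_classes,
    is_training = True, create_aux_logits = False)

# ... add the custom slim.fully_connected / slim.dropout layers here,
# exactly as in the question, ending in my_layer_logits ...

loss = tf.losses.softmax_cross_entropy(
    onehot_labels = tf.one_hot(labels, depth = num_classes),  # labels assumed to be integer indices
    logits = my_layer_logits)
optimizer = tf.train.AdamOptimizer(learning_rate = 0.0001)
train_op = slim.learning.create_train_op(loss, optimizer)

# slim.learning.train manages the session, queue runners and checkpointing itself,
# so no feed_dict is passed. The pretrained InceptionV4 weights can be restored by
# passing the result of slim.assign_from_checkpoint_fn as the init_fn argument.
final_loss = slim.learning.train(train_op, logdir = new_ckpt_dir,
                                 number_of_steps = 1000,
                                 save_summaries_secs = 5,
                                 log_every_n_steps = 50)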
