
Tensorflow: optimizer.minimize() - "ValueError: Shape must be rank 0 but is rank 1"

I am trying to adapt Tensorflow r0.12 code (from https://github.com/ZZUTK/Face-Aging-CAAE ) to version 1.2.1, and I am having issues with optimizer.minimize().

In this case I am using GradientDescent, but the error message below is only slightly different (in terms of the shapes reported) when I try other optimizers:

ValueError: Shape must be rank 0 but is rank 1 for
'GradientDescent/update_E_conv0/w/ApplyGradientDescent' (op: 'ApplyGradientDescent')
with input shapes: [5,5,1,64], [1], [5,5,1,64].

Where [5,5] is my kernel size, 1 is the number of input channels, and 64 is the number of filters in the first convolution. This is the convolutional encoder network it is referring to:

E_conv0:    (100, 128, 128, 64)
E_conv1:    (100, 64, 64, 128)
E_conv2:    (100, 32, 32, 256)
E_conv3:    (100, 16, 16, 512)
E_conv4:    (100, 8, 8, 1024)
E_conv5:    (100, 4, 4, 2048)
...

This is the code that triggers the error:

self.EG_optimizer = tf.train.GradientDescentOptimizer(
    learning_rate=EG_learning_rate, 
    beta1=beta1
).minimize(
    loss=self.loss_EG,
    global_step=self.EG_global_step,
    var_list=self.E_variables + self.G_variables
)

Where:

EG_learning_rate = tf.train.exponential_decay(
    learning_rate=learning_rate,
    global_step=self.EG_global_step,
    decay_steps=size_data / self.size_batch * 2,
    decay_rate=decay_rate,
    staircase=True
)

self.EG_global_step = tf.get_variable(
    name='global_step',
    shape=1,
    initializer=tf.constant_initializer(0),
    trainable=False
)

And

self.E_variables = [var for var in trainable_variables if 'E_' in var.name]
self.G_variables = [var for var in trainable_variables if 'G_' in var.name]

self.loss_EG = tf.reduce_mean(tf.abs(self.input_image - self.G))

After some debugging, I now believe the problem comes from the minimize() method. The error seems to be attributed to the last parameter (var_list), but when I try to comment out the second or third parameter, the error remains the same and is just attributed to the first parameter (loss).

To adapt the code to the new version I have changed it with respect to the one currently on GitHub, so I worked a lot with tf.variable_scope(tf.get_variable_scope(), reuse=True). Could this be the cause?

Thank you so much in advance!

It's tricky to decode, since it comes from an internal op, but this error message points to the cause:

ValueError: Shape must be rank 0 but is rank 1 for 'GradientDescent/update_E_conv0/w/ApplyGradientDescent' (op: 'ApplyGradientDescent') with input shapes: [5,5,1,64], [1], [5,5,1,64].

One of the inputs to the ApplyGradientDescent op is a rank-1 tensor (i.e. a vector) when it should be a rank-0 tensor (i.e. a scalar). Looking at the definition of the ApplyGradientDescent op, the only scalar input is alpha, the learning rate.
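
To see this in isolation, here is a minimal sketch (assuming TF 1.x; the variable name w is just for illustration) that reproduces the error by passing a rank-1 learning rate to the optimizer:

import tensorflow as tf

w = tf.get_variable('w', shape=[5, 5, 1, 64])
loss = tf.reduce_sum(w)

# A learning rate of shape [1] -- a rank-1 tensor, not a scalar.
bad_lr = tf.constant([0.01])

# Fails at graph-construction time with:
# ValueError: Shape must be rank 0 but is rank 1 for '.../ApplyGradientDescent'
# (op: 'ApplyGradientDescent') with input shapes: [5,5,1,64], [1], [5,5,1,64].
train_op = tf.train.GradientDescentOptimizer(learning_rate=bad_lr).minimize(loss)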

Therefore, it appears that the EG_learning_rate tensor is a vector when it should be a scalar. A simple fix would be to "slice" a scalar from the EG_learning_rate tensor when you construct the tf.train.GradientDescentOptimizer:

scalar_learning_rate = EG_learning_rate[0]

self.EG_optimizer = tf.train.GradientDescentOptimizer(
    learning_rate=scalar_learning_rate, 
    beta1=beta1
).minimize(
    loss=self.loss_EG,
    global_step=self.EG_global_step,
    var_list=self.E_variables + self.G_variables
)
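
Note that the rank-1 learning rate most likely traces back to how the global step is created: shape=1 in tf.get_variable gives a length-1 vector, and tf.train.exponential_decay then produces a learning rate of the same shape [1]. Assuming nothing else depends on that extra dimension, an alternative sketch of a fix is to declare the global step as a true scalar, so that the decayed learning rate comes out rank-0:

self.EG_global_step = tf.get_variable(
    name='global_step',
    shape=[],  # rank-0 scalar, instead of shape=1 (a length-1 vector)
    dtype=tf.int32,
    initializer=tf.constant_initializer(0),
    trainable=False
)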
