TensorFlow logical operation ((A == B) && (C == D)) results in "Incompatible shapes: [2] vs. [3]"
I'm trying to build the following logical expression:
tf.logical_and(tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0)),
               tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1)), name=None)
But it results in the following error:

Incompatible shapes: [2] vs. [3]
tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0)) and tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1)) work fine separately; the error occurs only with tf.logical_and. tf.logical_and expects boolean tensors and tf.equal returns boolean tensors, so all the arguments appear to be in order, and I'm not sure why it fails.
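For reference, here is a minimal sketch that reproduces the mismatch; the constant values are hypothetical, chosen only to match the batch size of 3 and the 2 classes implied by the error:

import tensorflow as tf

# Hypothetical logits/labels: a batch of 3 examples, 2 classes.
y_conv = tf.constant([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
y_ = tf.constant([[0., 1.], [1., 0.], [0., 1.]])

a = tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0))
b = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
print(a.get_shape())  # (2,) -- argmax over axis 0 gives one value per class
print(b.get_shape())  # (3,) -- argmax over axis 1 gives one value per example
# tf.logical_and(a, b) fails with "Incompatible shapes: [2] vs. [3]"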
To give some context, the original code is below. I'm just trying to update correct_prediction to include both dimensions 0 and 1 for tf.argmax.
UPDATE1 Start (this adds all of the variable declarations)
import tensorflow as tf

sess = tf.InteractiveSession()

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

W_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
x = tf.placeholder(tf.float32, shape=[None, 9])
x_image = tf.reshape(x, [-1, 3, 3, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)  # computed but unused below; h_conv1 is flattened instead
h_pool2_flat = tf.reshape(h_conv1, [-1, 3 * 3 * 32])
# W_fc1, b_fc1, and keep_prob were missing from the post; the shapes below
# are assumed from the flattened input and the [64, 2] output layer.
W_fc1 = weight_variable([3 * 3 * 32, 64])
b_fc1 = bias_variable([64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
W_fc2 = weight_variable([64, 2])
b_fc2 = bias_variable([2])
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
UPDATE1 End (this adds all of the variable declarations)
This is where the problem is located:
y_ = tf.placeholder(tf.float32, shape=[None, 2])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
# This works: correct_prediction = tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0)). Changed it to:
correct_prediction = tf.logical_and(tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0)),
                                    tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1)), name=None)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# train_step was not defined in the post; a standard setup is assumed here.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
sess.run(tf.global_variables_initializer())
train_step.run(feed_dict={x: xtrain, y_: ytrain, keep_prob: 0.5})
#In debugging mode, code breaks at the below line
print("test accuracy %g"%accuracy.eval(feed_dict={x: xtest, y_: ytest, keep_prob: 1.0}))
How can I debug this error?
The problem arises because tf.equal() is an elementwise operation, and it returns a tensor with the same shape as its arguments. Since tf.argmax(..., 0) reduces over the batch dimension (giving one value per class, shape [2]) while tf.argmax(..., 1) reduces over the class dimension (giving one value per example, here shape [3]), the two tf.equal() results have incompatible shapes. The easiest way to fix your expression is to use tf.reduce_all() to aggregate the results of tf.equal() down to a scalar before computing the tf.logical_and(), as follows:
tf.logical_and(
    tf.reduce_all(tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0))),
    tf.reduce_all(tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))))
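As a quick sanity check with hypothetical constants (a batch of 3 examples and 2 classes, matching the shapes in the error), the combined expression now evaluates to a single scalar boolean for the whole batch:

import tensorflow as tf

y_conv = tf.constant([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
y_ = tf.constant([[0., 1.], [1., 0.], [0., 1.]])

correct_prediction = tf.logical_and(
    tf.reduce_all(tf.equal(tf.argmax(y_conv, 0), tf.argmax(y_, 0))),
    tf.reduce_all(tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))))

with tf.Session() as sess:
    print(sess.run(correct_prediction))  # a single True/False for the batch

Note that because the result is now a scalar, tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) will be either 0.0 or 1.0 for the whole batch, rather than a per-example accuracy.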