CIFAR-10 TensorFlow: InvalidArgumentError (see above for traceback): logits and labels must be broadcastable

I am implementing a CNN as shown below, but I am getting this error:

InvalidArgumentError (see above for traceback): logits and labels must be broadcastable

I have attached part of my code below. I suspect the error is caused by the shapes and dimensions of my weights and biases.

What I want to achieve: I want to reduce the CNN from two fully connected layers to just one fully connected layer, meaning the network ends with out = tf.add(tf.matmul(fc1, ...)) and stops there.

nInput = 32
nChannels = 3
nClasses = 10

# Placeholder and drop-out
X = tf.placeholder(tf.float32, [None, nInput, nInput, nChannels])
Y = tf.placeholder(tf.float32, [None, nClasses])
keep_prob = tf.placeholder(tf.float32)

def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)


def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')


def normalize_layer(pooling):
    #norm = tf.nn.lrn(pooling, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')
    norm = tf.contrib.layers.batch_norm(pooling, center=True, scale=True)
    return norm


def drop_out(fc, keep_prob=0.4):
    # Note: tf.layers.dropout's rate is the probability to DROP units,
    # so passing keep_prob as rate drops 40% of the activations here.
    drop_out = tf.layers.dropout(fc, rate=keep_prob)
    return drop_out


weights = {
    'WC1': tf.Variable(tf.random_normal([5, 5, 3, 32]), name='W0'),
    'WC2': tf.Variable(tf.random_normal([5*5*32, 64]), name='W1'),
    #'WD1': tf.Variable(tf.random_normal([8 * 8 * 64, 64]), name='W2'),
    #'WD2': tf.Variable(tf.random_normal([64, 128]), name='W3'),
    'out': tf.Variable(tf.random_normal([64, nClasses]), name='W5')
}

biases = {
    'BC1': tf.Variable(tf.random_normal([32]), name='B0'),
    'BC2': tf.Variable(tf.random_normal([64]), name='B1'),
    #'BD1': tf.Variable(tf.random_normal([64]), name='B2'),
    #'BD2': tf.Variable(tf.random_normal([128]), name='B3'),
    'out': tf.Variable(tf.random_normal([nClasses]), name='B5')
}

def conv_net(x, weights, biases):
    conv1 = conv2d(x, weights['WC1'], biases['BC1'])
    conv1 = maxpool2d(conv1)
    conv1 = normalize_layer(conv1)

    #conv2 = conv2d(conv1, weights['WC2'], biases['BC2'])
    #conv2 = maxpool2d(conv2)
    #conv2 = normalize_layer(conv2)

    fc1 = tf.reshape(conv1, [-1, weights['WC2'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['WC2']), biases['BC2'])
    fc1 = tf.nn.relu(fc1)  # ReLU activation (the commented-out fc2 below used selu)
    fc1 = drop_out(fc1)

    #fc2 = tf.add(tf.matmul(fc1, weights['WD2']), biases['BD2'])
    #fc2 = tf.nn.selu(fc2)  # Using self-normalization activation
    #fc2 = drop_out(fc2)

    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    out = tf.nn.softmax(out)

    return out

I think there is a problem with the 'WC2' entry of your weights dictionary. It should be 'WC2': tf.Variable(tf.random_normal([16*16*32, 64]), name='W1')

After applying one convolution and one max-pooling operation, you have downsampled the input image from 32 x 32 x 3 to 16 x 16 x 32 (SAME pooling with k=2 halves the height and width, and the conv layer produces 32 channels). You now need to flatten that downsampled output to feed it into the fully connected layer, which is why you need to pass 16*16*32. With the original 5*5*32, the -1 dimension of the reshape no longer equals the batch size, so the logits end up with a different first dimension than the labels, which is what triggers the "logits and labels must be broadcastable" error.
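A minimal sketch (assuming TensorFlow 1.x, to match the question's API) that confirms the post-pooling shape and the corrected flatten size:

import tensorflow as tf

# Dummy CIFAR-10 input: batch of 32 x 32 x 3 images
x = tf.placeholder(tf.float32, [None, 32, 32, 3])

# One 5x5 conv with 32 filters + 2x2 max pool, as in the question
w = tf.Variable(tf.random_normal([5, 5, 3, 32]))
b = tf.Variable(tf.random_normal([32]))
conv = tf.nn.relu(tf.nn.bias_add(tf.nn.conv2d(x, w, [1, 1, 1, 1], 'SAME'), b))
pool = tf.nn.max_pool(conv, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')

print(pool.get_shape().as_list())            # [None, 16, 16, 32]
flat = tf.reshape(pool, [-1, 16 * 16 * 32])  # -1 stays the batch dimension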
