
Incompatible shapes on tensorflow

I am new in CNN,I am trying to make a CNN to classify image data set of handwritten English alphabet(az),(AZ) and numbers (0-9),which have 62 labels.Each image size is 30*30 pixels.I am following the steps as in a tutorial https://www.tensorflow.org/get_started/mnist/pros . 我是CNN的新手,我正在尝试制作一个CNN,以对具有62个标签的手写英文字母(az),(AZ)和数字(0-9)的图像数据集进行分类。每个图像尺寸为30 * 30像素我正在按照https://www.tensorflow.org/get_started/mnist/pros教程中的步骤进行操作。 When I run the model i get an error 运行模型时出现错误

tensorflow.python.framework.errors.InvalidArgumentError: Incompatible shapes: [40] vs. [10]

My batch size is 10, and the error appears to be at correct_prediction. The solution to the same problem given in "Tensorflow Incompatible Shapes Error in Tutorial" didn't fix my problem. Any help will be appreciated. The dataset was serialized with pickle first; here is my code.

import tensorflow as tf
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
X = []
y = []
import pickle

#load data from pickle
pickle_file = 'let.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    X = save['dataset']
    y = save['labels']

    del save  # hint to help gc free up memory

#normalise the features
X = (X - 255/2) / 255

# one hot encoding
y = pd.get_dummies(y)
y = y.values  # change to ndarray
y = np.float32(y)
X = np.float32(X)
Xtr, Xte, Ytr, Yte = train_test_split(X, y, train_size=0.7)

batch_size = 10

sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, shape=[None, 900])
y_ = tf.placeholder(tf.float32, shape=[None, 62])

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1,30,30,1])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

W_fc1 = weight_variable([4 * 4 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 4*4*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

W_fc2 = weight_variable([1024, 62])
b_fc2 = bias_variable([62])

y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(20000):
    offset = (i * batch_size) % (Ytr.shape[0] - batch_size)
    batch_x = Xtr[offset:(offset + batch_size), :]
    batch_y = Ytr[offset:(offset + batch_size), :]
    if i%100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x:batch_x , y_: batch_y, keep_prob: 1.0})
        print("step %d, training accuracy %g"%(i, train_accuracy))
    train_step.run(feed_dict={x: Xtr[offset:(offset + batch_size), :], y_: Ytr[offset:(offset + batch_size), :], keep_prob: 0.5})

print("test accuracy %g"%accuracy.eval(feed_dict={x: Xte, y_: Yte, keep_prob: 1.0}))

Take a look at these lines of code:

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))

If labels is a list with the index of the correct prediction (for example: [1,32,13, ...]), then softmax_cross_entropy_with_logits is being used correctly, which means the error is in these lines. I have added comments describing what they do:

tf.argmax(y_conv,1) # Takes the max index of logits
tf.argmax(y_,1) # Takes the max index of ???. 

Although I did not test it, replacing it with this line should work:

correct_prediction = tf.equal(tf.argmax(y_conv,1), y_)
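
As a side note (untested, and only applicable if the labels really are integer class indices rather than one-hot vectors): TensorFlow also provides tf.nn.sparse_softmax_cross_entropy_with_logits, which takes the indices directly, so no one-hot encoding or argmax over y_ is needed. A minimal sketch, where y_idx is a hypothetical placeholder holding the integer labels:

y_idx = tf.placeholder(tf.int64, shape=[None])  # hypothetical: integer class indices, shape [batch_size]

cross_entropy = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_idx, logits=y_conv))
correct_prediction = tf.equal(tf.argmax(y_conv, 1), y_idx)  # tf.argmax returns int64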

Let me know when you fixed it :D

I changed the dimensions of the input to the fully connected layer from 4 to 8; it's working now. (With 4 * 4 * 64 = 1024, reshaping the [10, 8, 8, 64] pooling output with tf.reshape(h_pool2, [-1, 1024]) produced 40 rows instead of 10, which is exactly the "Incompatible shapes: [40] vs. [10]" error.)

W_fc1 = weight_variable([8 * 8 * 64, 1024])
b_fc1 = bias_variable([1024])

#the input should be shaped/flattened
h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
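
For completeness, the 8 comes from the pooling arithmetic: with padding='SAME', each 2x2 max pool rounds the spatial size up, so a 30x30 image becomes 15x15 after the first pool and 8x8 after the second. A minimal sketch (assuming the same pooling setup as above; the conv layers are omitted because they use SAME padding and stride 1, so they don't change the spatial size) that verifies the shapes:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 900])
x_image = tf.reshape(x, [-1, 30, 30, 1])

def max_pool_2x2(t):
    return tf.nn.max_pool(t, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

h_pool1 = max_pool_2x2(x_image)  # spatial size: ceil(30 / 2) = 15
h_pool2 = max_pool_2x2(h_pool1)  # spatial size: ceil(15 / 2) = 8

print(h_pool1.get_shape())  # (?, 15, 15, 1)
print(h_pool2.get_shape())  # (?, 8, 8, 1)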
