
PCA Implementation on a Convolutional Neural Network

After applying PCA to the MNIST data, I defined my CNN model and its layers. After fitting the CNN model with (X_train_pca, Y_train), I run into a dimension problem at the evaluation phase. The message is: "ValueError: Error when checking input: expected conv2d_1_input to have shape (1, 10, 10) but got array with shape (1, 28, 28)". When I try to reshape X_test into 10x10 format, I get a very low score.

First I applied min-max normalization to X_train, and then PCA. Then I split off validation data from X_train. The problem is: since I fit the model on the 100-dimensional data (after applying PCA), my input becomes 10x10. When I try to score the fitted model using X_test, which is still (10000, 1, 28, 28), I get the error mentioned above. How can I solve this dimension problem? I also tried transforming X_test with MinMaxScaler and PCA, but the score did not change.
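The preprocessing described above can be sketched as follows. This is a minimal sketch with stand-in random data and assumed variable names (the real `X_train`/`X_test` would be the flattened MNIST matrices); the key point is that the scaler and PCA are fitted on the training data only and then reused to transform the test data.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Stand-ins for flattened MNIST images (real shapes: (60000, 784), (10000, 784)).
rng = np.random.default_rng(0)
X_train = rng.random((1000, 784))
X_test = rng.random((200, 784))

scaler = MinMaxScaler().fit(X_train)                    # fit on training data only
pca = PCA(n_components=100).fit(scaler.transform(X_train))

X_train_pca = pca.transform(scaler.transform(X_train))  # (1000, 100)
X_test_pca = pca.transform(scaler.transform(X_test))    # same fitted objects applied to test
print(X_train_pca.shape, X_test_pca.shape)
```

Both transformed arrays end up with 100 features, which is what allows the 10x10 reshape later on.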

pca_3D = PCA(n_components=100)
X_train_pca = pca_3D.fit_transform(X_train)
X_train_pca.shape  
# This is the call that raises the ValueError above, because X_test was
# never transformed with the fitted PCA:
cnn_model_1_scores = cnn_model_1.evaluate(X_test, Y_test, verbose=0)

# Split the data into training, validation and test sets
X_train1 = X_train_pca[:train_size]
X_valid = X_train_pca[train_size:]
Y_train1 = Y_train[:train_size]
Y_valid = Y_train[train_size:]

# We need to convert the input into (samples, channels, rows, cols) format
X_train1 = X_train1.reshape(X_train1.shape[0], 1, 10, 10).astype('float32')
X_valid = X_valid.reshape(X_valid.shape[0], 1, 10, 10).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
X_train1.shape, X_valid.shape, X_test.shape  
((51000, 1, 10, 10), (9000, 1, 10, 10), (10000, 1, 28, 28))

#create model
cnn_model_1=Sequential()

#1st Convolutional Layer
cnn_model_1.add(Conv2D(32, kernel_size=(5,5),
                  data_format="channels_first",
                  input_shape=(1,10,10),
                  activation='relu'))
#Max-Pooling
cnn_model_1.add(MaxPooling2D(pool_size=(2,2), data_format="channels_first"))
# Max pooling is a sample-based discretization process. The objective is to
# down-sample an input representation (image, hidden-layer output matrix,
# etc.), reducing its dimensionality;
# the number of channels remains unchanged by the pooling operation
#cnn_model_1.add(BatchNormalization()) 
#Flatten the feature maps before the dense layers
cnn_model_1.add(Flatten())
#cnn_model_1.add(BatchNormalization()) 
#2nd Dense Layer
cnn_model_1.add(Dense(128, activation='relu'))

#final softmax layer
cnn_model_1.add(Dense(10, activation='softmax'))

# print a summary and check if you created the network you intended
cnn_model_1.summary()
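As a sanity check on the summary, the output shapes can be computed by hand (a sketch, assuming the Keras defaults of 'valid' padding and stride 1 used above):

```python
# Shape arithmetic for the model above with input_shape=(1, 10, 10):
conv_out = 10 - 5 + 1        # Conv2D(32, (5,5)) on 10x10 -> 6x6 feature maps
pool_out = conv_out // 2     # MaxPooling2D((2,2)) -> 3x3 feature maps
flat = 32 * pool_out ** 2    # Flatten over 32 channels -> 288 units
print(conv_out, pool_out, flat)   # 6 3 288
```

If the summary shows different shapes, the input or data_format settings are not what the network expects.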

#Compile Model
cnn_model_1.compile(loss='categorical_crossentropy', optimizer='adam',
                    metrics=['accuracy'])

#Fit the model
cnn_model_1_history = cnn_model_1.fit(X_train1, Y_train1,
                                      validation_data=(X_valid, Y_valid),
                                      epochs=5, batch_size=100, verbose=2)

# Final evaluation of the model
cnn_model_1_scores = cnn_model_1.evaluate(X_test, Y_test, verbose=0)
print("Baseline Test Accuracy={0:.2f}%   (categorical_crossentropy) loss={1:.2f}".format(
    cnn_model_1_scores[1]*100, cnn_model_1_scores[0]))
cnn_model_1_scores

I solved the problem, and I'm updating the post to give other coders some intuition for debugging their code. First, I applied PCA to the X_test data; after getting a low score I tried without applying it. As @Scott suggested, this was wrong. After carefully checking my code, I saw that I had forgotten to change X_test to X_test_pca (after applying PCA to the test data) when evaluating the CNN model. I also made sure to fit the PCA on X_train and only apply the fitted transform to X_test.
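The fix described above can be sketched like this. It uses stand-in random data and the question's variable names (`pca_3D`, `X_test_pca` are assumptions matching the post): transform X_test with the PCA already fitted on X_train, then reshape to the (1, 10, 10) input the model was trained on, and pass that to evaluate.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-ins for the question's data (real shapes: (60000, 784) and (10000, 1, 28, 28)).
rng = np.random.default_rng(0)
X_train = rng.random((2000, 784))
X_test = rng.random((500, 1, 28, 28))

pca_3D = PCA(n_components=100).fit(X_train)   # fit on training data only

# Flatten the test images, apply the *already fitted* PCA, then reshape to
# the (channels, rows, cols) format the network expects.
X_test_flat = X_test.reshape(X_test.shape[0], 784)
X_test_pca = pca_3D.transform(X_test_flat)
X_test_pca = X_test_pca.reshape(X_test_pca.shape[0], 1, 10, 10).astype('float32')
print(X_test_pca.shape)   # (500, 1, 10, 10)

# Now the evaluation receives the (1, 10, 10) input shape the model was trained on:
# cnn_model_1_scores = cnn_model_1.evaluate(X_test_pca, Y_test, verbose=0)
```

Calling `pca_3D.transform` (rather than `fit_transform`) on the test set is the crucial step: the test data must be projected with the components learned from the training data.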
