Why does my Conv2D model complain it's not getting 4 dimensions when the shape of my input data is 4D?
I am trying to classify the MNIST database of handwritten digits with a convolutional network, but I keep getting this error: ValueError: Error when checking input: expected conv2d_40_input to have 4 dimensions, but got array with shape (28, 28, 1)
My task requires cross-validation by subsampling, which is why the data is split into 5 chunks.
def train_conv_subsample():
    # Split the data into chunks
    chunks = []
    chunk_labels = []
    num_chunks = 5
    chunk_size = int(train_data.shape[0] / num_chunks)
    for i in range(num_chunks):
        chunks.append(train_data[(i * chunk_size):(i + 1) * chunk_size])
        chunk_labels.append(train_labels[(i * chunk_size):(i + 1) * chunk_size])
    # Create another convolutional model to train.
    for i in range(num_chunks):
        current_train_data = []
        current_train_labels = []
        for j in range(num_chunks):
            if i == j:
                validation_data = chunks[i]
                validation_labels = chunk_labels[i]
            else:
                current_train_data.extend(chunks[j])
                current_train_labels.extend(chunk_labels[j])
        print(np.shape(current_train_data))  # Says it has a shape of (48000, 28, 28, 1)
        model = models.Sequential([
            layers.Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
            layers.MaxPooling2D(pool_size=(2, 2)),
            layers.Flatten(),
            layers.Dense(32, activation='relu'),
            layers.Dense(10, activation='softmax')
        ])
        model.compile(optimizer='adam',
                      loss=tf.keras.losses.CategoricalCrossentropy(),
                      metrics=['accuracy'])
        # But when it goes to fit it raises the error: expected 4 dim, but got array with shape (28, 28, 1)
        model.fit(current_train_data, current_train_labels, epochs=1,
                  validation_data=(validation_data, validation_labels))
        tf.keras.backend.clear_session()
That is my code. The dataset I am using can be loaded from the Keras datasets via datasets.mnist.load_data().
Thanks for your help.
I think the problem is the shape of the images from the MNIST dataset: you need to reshape them into 4-dimensional arrays using NumPy's reshape, like this:
import numpy as np
dataset = np.reshape(dataset, (-1, 28, 28, 1))
If that doesn't work, try converting the images to grayscale with the OpenCV library before reshaping.
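More specifically, in the question's loop `current_train_data` is built with `list.extend()`, so it is a Python list of individual (28, 28, 1) arrays rather than a single 4-D array; Keras then reports the shape of one element instead of the whole batch. A minimal sketch of the fix (using random stand-in data instead of the real MNIST arrays) is to convert the list back to one NumPy array before calling fit:

```python
import numpy as np

# Stand-in for the MNIST training images (100 samples instead of 60000).
fake_images = np.random.rand(100, 28, 28, 1)
num_chunks = 5
chunk_size = 20
chunks = [fake_images[i * chunk_size:(i + 1) * chunk_size]
          for i in range(num_chunks)]

# Same pattern as the question: hold out fold 0, extend() the rest.
current_train_data = []
for j in range(num_chunks):
    if j != 0:
        current_train_data.extend(chunks[j])

# extend() leaves a plain list of (28, 28, 1) arrays; converting it
# back to one ndarray restores the leading batch dimension, giving
# the 4-D input that Conv2D expects.
x_train = np.asarray(current_train_data)
print(x_train.shape)  # (80, 28, 28, 1)
```

With the real data, `x_train = np.asarray(current_train_data)` (and the same for the labels) before `model.fit(...)` should make the error go away.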