Autoencoder input shape: expected input_1 to have shape (256, 256, 3) but got array with shape (256, 256, 4)
I am trying to build an autoencoder, but I get the following error and I can't figure out why.
ValueError: Error when checking input: expected input_1 to have shape (256, 256, 3) but got array with shape (256, 256, 4)
If I print the image shape I get (256, 256, 3), but I still get an error about the shape.
Any help would be fantastic.
Ubuntu 18.04 | Python 3.7.6 | Tensorflow 2.1
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, Dropout, Conv2DTranspose, UpSampling2D, add
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers
import os
import re
from scipy import ndimage, misc
from skimage.transform import resize, rescale
from matplotlib import pyplot
import numpy as np
#Functions
def train_batches(just_load_dataset=False):
    batches = 256 # Number of images to have at the same time in a batch
    batch = 0 # Number of images in the current batch (grows over time and then resets for each batch)
    batch_nb = 0 # Batch current index
    max_batches = -1 # If you want to train only on a limited number of images to finish the training even faster.
    ep = 4 # Number of epochs
    images = []
    x_train_n = []
    x_train_down = []
    x_train_n2 = [] # Resulting high res dataset
    x_train_down2 = [] # Resulting low res dataset
    for root, dirnames, filenames in os.walk(input_dir):
        for filename in filenames:
            if re.search(r"\.(jpg|jpeg|JPEG|png|bmp|tiff)$", filename):
                if batch_nb == max_batches: # If we limit the number of batches, just return earlier
                    return x_train_n2, x_train_down2
                filepath = os.path.join(root, filename)
                image = pyplot.imread(filepath)
                if len(image.shape) > 2:
                    image_resized = resize(image, (256, 256))
                    x_train_n.append(image_resized)
                    x_train_down.append(rescale(rescale(image_resized, 0.5), 2.0))
                    batch += 1
                    if batch == batches:
                        batch_nb += 1
                        x_train_n2 = np.array(x_train_n)
                        x_train_down2 = np.array(x_train_down)
                        if just_load_dataset:
                            return x_train_n2, x_train_down2
                        print('Training batch', batch_nb, '(', batches, ')')
                        autoencoder.fit(x_train_down2, x_train_n2, epochs=ep, batch_size=10, shuffle=True, validation_split=0.15)
                        x_train_n = []
                        x_train_down = []
                        batch = 0
    return x_train_n2, x_train_down2
#Script
input_dir="/mnt/vanguard/datasets/ffhq-dataset/thumbnails256x256"
n = 256
chan = 3
input_img = Input(shape=(n, n, chan))
# Encoder
l1 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(input_img)
l2 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l1)
l3 = MaxPooling2D(padding='same')(l2)
l3 = Dropout(0.3)(l3)
l4 = Conv2D(128, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l3)
l5 = Conv2D(128, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l4)
l6 = MaxPooling2D(padding='same')(l5)
l7 = Conv2D(256, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l6)
# Decoder
l8 = UpSampling2D()(l7)
l9 = Conv2D(128, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l8)
l10 = Conv2D(128, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l9)
l11 = add([l5, l10])
l12 = UpSampling2D()(l11)
l13 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l12)
l14 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l13)
l15 = add([l14, l2])
#chan = 3, for RGB
decoded = Conv2D(chan, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l15)
# Create neural network
autoencoder = Model(input_img, decoded)
autoencoder_hfenn = Model(input_img, decoded)
autoencoder.summary()
autoencoder.compile(optimizer='adadelta', loss='mean_squared_error')
x_train_n = []
x_train_down = []
x_train_n, x_train_down = train_batches()
The rescaling
x_train_down.append(rescale(rescale(image_resized, 0.5), 2.0))
is causing the problem. By default, skimage's rescale applies the scale factor to every axis, including the channel axis: 0.5 takes (256, 256, 3) to (128, 128, 2), and scaling back up by 2.0 then gives (256, 256, 4), which is exactly the shape in the error. OpenCV can be used instead for degrading the image quality:
import cv2
# cv2.resize only scales the spatial axes, so the channel count stays at 3
small = cv2.resize(image_resized, (0, 0), fx=0.5, fy=0.5)
large = cv2.resize(small, (0, 0), fx=2.0, fy=2.0)
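Alternatively, if you would rather stay with skimage, rescale accepts a multichannel flag (replaced by channel_axis in skimage 0.19+) that keeps the channel axis out of the scaling. A minimal sketch, assuming skimage <= 0.18:
small = rescale(image_resized, 0.5, multichannel=True)   # (128, 128, 3)
blurry = rescale(small, 2.0, multichannel=True)          # back to (256, 256, 3)
x_train_down.append(blurry)
Either way, x_train_down2 then keeps the 3-channel shape that Input(shape=(256, 256, 3)) expects.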
Also, note that this is a GPU-intensive computation. Either reduce the image size, or try a GPU with more memory (e.g. a K80).
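For example, the two easiest knobs in the script above are the input resolution n and the batch_size passed to fit; a hypothetical adjustment (the values are illustrative, not tuned):
n = 128  # halving the resolution roughly quarters the activation memory
         # (the resize/rescale calls must then target (128, 128) as well)
# or keep n = 256 and feed the GPU smaller mini-batches:
autoencoder.fit(x_train_down2, x_train_n2, epochs=ep, batch_size=4, shuffle=True, validation_split=0.15)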