
How to create a Keras Conv2D layer on a grayscale image set

I have created this NN:

from tensorflow.keras.layers import Input, Conv2D, UpSampling2D
from tensorflow.keras.models import Model

#Encoder
encoder_input = Input(shape=(1,height, width))
encoder_output = Conv2D(64, (3,3), activation='relu', padding='same', strides=2)(encoder_input)
encoder_output = Conv2D(128, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(128, (3,3), activation='relu', padding='same', strides=2)(encoder_output)
encoder_output = Conv2D(256, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(256, (3,3), activation='relu', padding='same', strides=2)(encoder_output)
encoder_output = Conv2D(512, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(512, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(256, (3,3), activation='relu', padding='same')(encoder_output)
#Decoder
decoder_output = Conv2D(128, (3,3), activation='relu', padding='same')(encoder_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)
decoder_output = Conv2D(64, (3,3), activation='relu', padding='same')(decoder_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)
decoder_output = Conv2D(32, (3,3), activation='relu', padding='same')(decoder_output)
decoder_output = Conv2D(16, (3,3), activation='relu', padding='same')(decoder_output)
decoder_output = Conv2D(2, (3, 3), activation='tanh', padding='same')(decoder_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)
model = Model(inputs=encoder_input, outputs=decoder_output)
model.compile(optimizer='adam', loss='mse' , metrics=['accuracy'])
clean_images = model.fit(train_images,y_train_red, epochs=200)

and train_images is created by:

train_images = np.array([ImageOperation.resizeImage(cv2.imread(train_path + str(i) + ".jpg"), height, width)
                         for i in range(train_size)])

y_train_red = [img[:, :, 2]/255 for img in train_images]

train_images = np.array([ImageOperation.grayImg(item) for item in train_images])

and when I execute the code I receive the following error:

Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (10, 200, 200)

How do I solve it?

Your images are 2D (height x width), whereas the model expects each image to be 3D (height x width x channels). Reshape your array to add the missing channel dimension, for example:

train_images = train_images.reshape(train_size, height, width, 1)

As the documentation says: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D
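
For this to line up end to end, the Input layer from the question also has to use the channels-last shape, i.e. Input(shape=(height, width, 1)) rather than Input(shape=(1, height, width)). Below is a minimal sketch of the idea, using random arrays as stand-ins for the real images and assuming height = width = 200 as in the error message:

import numpy as np
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

height, width, train_size = 200, 200, 10
train_images = np.random.rand(train_size, height, width)           # stand-in for the grayscale images
train_images = train_images.reshape(train_size, height, width, 1)  # add the channel axis last

encoder_input = Input(shape=(height, width, 1))                    # channels-last input shape
encoder_output = Conv2D(64, (3, 3), activation='relu', padding='same', strides=2)(encoder_input)
model = Model(inputs=encoder_input, outputs=encoder_output)
print(model.output_shape)  # (None, 100, 100, 64)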

You need a 4-dimensional input for a Conv2D layer. You have to add a channel dimension either after or before the two spatial dimensions of the image:

train_images = train_images.reshape(train_size, height, width, 1)

or

train_images = train_images.reshape(train_size, 1, height, width)

In both cases you have to specify the data layout in every layer of the network with data_format="channels_first" or data_format="channels_last".

For example:

encoder_output = Conv2D(64, (3,3), activation='relu', padding='same', strides=2, data_format="channels_last")(encoder_input)
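
A minimal sketch of the channels-first variant (only a couple of layers, not the full model from the question; the random arrays are stand-ins for the real images):

import numpy as np
from tensorflow.keras.layers import Input, Conv2D, UpSampling2D
from tensorflow.keras.models import Model

height, width, train_size = 200, 200, 10
train_images = np.random.rand(train_size, height, width)
train_images = train_images.reshape(train_size, 1, height, width)  # channel axis before height and width

encoder_input = Input(shape=(1, height, width))
x = Conv2D(64, (3, 3), activation='relu', padding='same', strides=2,
           data_format="channels_first")(encoder_input)
x = UpSampling2D((2, 2), data_format="channels_first")(x)
decoder_output = Conv2D(2, (3, 3), activation='tanh', padding='same',
                        data_format="channels_first")(x)
model = Model(inputs=encoder_input, outputs=decoder_output)
print(model.output_shape)  # (None, 2, 200, 200)

Note that channels-first convolutions are not supported on some CPU-only TensorFlow builds, so channels_last is usually the simpler choice.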
