
Autoencoder design for image enhancement

I am trying to train a simple autoencoder for an image enhancement task. I have a dataset consisting of normal images and enhanced images (the ground truth). I followed the Keras documentation to build such a model, but after running the code I get a memory error (Unable to allocate 768. KiB for an array with shape (1, 256, 256, 3) and data type float32). Can anyone help me adjust the code to solve this issue?

from keras.models import Model
from keras.layers import Conv2D, concatenate, Input, Add, MaxPooling2D, UpSampling2D
from keras.preprocessing.image import ImageDataGenerator
import numpy as np

data_gen_args = dict(
                     rescale=1. / 255,
                    )

image_datagen = ImageDataGenerator(**data_gen_args)
gt_datagen = ImageDataGenerator(**data_gen_args)

train_it = image_datagen.flow_from_directory('../data/train/images', class_mode='input', batch_size=1)
test_it = image_datagen.flow_from_directory('../data/test/images', class_mode='input', batch_size=1)

train_it_gt = gt_datagen.flow_from_directory('../data/train/ground_truth', class_mode='input', batch_size=1)
test_it_gt = gt_datagen.flow_from_directory('../data/test/ground_truth', class_mode='input', batch_size=1)

input_img = Input(shape=(640, 480, 3))

x = Conv2D(4, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

x = Conv2D(4, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(4, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

autoencoder.fit_generator(np.array(train_it), np.array(train_it_gt),
                epochs=10,
                shuffle=True,
                validation_data=(np.array(test_it), np.array(test_it_gt)),
                callbacks=[])

autoencoder.save("autoencoder_model")

Basically you are running out of memory. By default steps_per_epoch and validation_steps are None, in which case Keras takes the batch size as 1. steps_per_epoch defines how many batches of samples to use in one epoch.

From the Keras documentation:

steps_per_epoch: Total number of steps (batches of samples) to yield from the generator before declaring one epoch finished and starting the next epoch. It should typically be equal to the number of unique samples of your dataset divided by the batch size. Optional for Sequence: if unspecified, will use len(generator) as the number of steps.

validation_steps: Only relevant if validation_data is a generator. Total number of steps (batches of samples) to yield from validation_data before stopping at the end of every epoch. Optional for Sequence: if unspecified, will use len(validation_data) as the number of steps.

You can set the steps_per_epoch and validation_steps parameters explicitly when you have a huge amount of data; a sketch of how that might look is below.
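For illustration, here is a minimal sketch of a corrected training call for the code in the question. It passes the generators to fit_generator directly (instead of wrapping them in np.array), pairs the input and ground-truth streams with zip, and sets steps_per_epoch and validation_steps explicitly. The class_mode=None, seed, and target_size values are assumptions made for this sketch, not part of the original post; target_size=(640, 480) is chosen to match the Input(shape=(640, 480, 3)) layer, since the default (256, 256) is what appears in the error message.

# A sketch under the assumptions above. class_mode=None makes each generator
# yield only image batches, so the two streams can be zipped into
# (input, target) pairs; a shared seed keeps both in the same shuffle order.
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

train_images = datagen.flow_from_directory('../data/train/images', class_mode=None,
                                           target_size=(640, 480), batch_size=1, seed=1)
train_gt = datagen.flow_from_directory('../data/train/ground_truth', class_mode=None,
                                       target_size=(640, 480), batch_size=1, seed=1)
test_images = datagen.flow_from_directory('../data/test/images', class_mode=None,
                                          target_size=(640, 480), batch_size=1, seed=1)
test_gt = datagen.flow_from_directory('../data/test/ground_truth', class_mode=None,
                                      target_size=(640, 480), batch_size=1, seed=1)

# len() on a DirectoryIterator is its number of batches, so with
# batch_size=1 it equals the number of samples.
autoencoder.fit_generator(zip(train_images, train_gt),
                          steps_per_epoch=len(train_images),
                          validation_data=zip(test_images, test_gt),
                          validation_steps=len(test_images),
                          epochs=10)

Note also that the final Conv2D(1, ...) layer in the question produces a single-channel output; for RGB ground-truth targets it would need 3 filters, otherwise Keras will report a shape mismatch between the prediction and the target.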

Refer to these links:

Choosing number of Steps per Epoch

https://androidkt.com/how-to-set-steps-per-epoch-validation-steps-and-validation-split-in-kerass-fit-method/
