
Keras Autoencoder Input Image Size

Consider this Autoencoder:

import numpy as np

from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Reshape
from keras.models import Model

class ConvAutoencoder:

    def __init__(self, image_size, latent_dim):

        inp = Input(shape=(image_size[0], image_size[1], 1))

        x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)
        x = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
        x = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
        encoded = MaxPooling2D((2, 2), padding='same')(x)
        # at this point the representation is (4, 4, 8) i.e. 128-dimensional

        d = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
        d = UpSampling2D((2, 2))(d)
        d = Conv2D(8, (3, 3), activation='relu', padding='same')(d)
        d = UpSampling2D((2, 2))(d)
        d = Conv2D(16, (3, 3), activation='relu')(d)
        d = UpSampling2D((2, 2))(d)

        decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(d)

        self.model = Model(inp, decoded)
        self.encoder = Model(inp, encoded)
        self.model.compile(loss='mse', optimizer='Adam')

        print(self.model.summary())

I instantiate it with

ConvAutoencoder(image_size=(32,32), latent_dim=10)

which prints

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 32, 32, 1)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 32, 32, 16)        160       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 8)         1160      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 8)           0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 8, 8, 8)           584       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 8)           0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 4, 4, 8)           584       
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 8, 8, 8)           0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 8, 8, 8)           584       
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 16, 16, 8)         0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 14, 14, 16)        1168      
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, 28, 28, 16)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 28, 28, 1)         145       
=================================================================
Total params: 4,385
Trainable params: 4,385
Non-trainable params: 0
_________________________________________________________________
None

As you can see, the input image size is (32, 32) but the output image size is (28, 28).
* Question 1: How can I change the architecture of the autoencoder so that the output image size becomes (32, 32)?
* Question 2: The class expects an argument called latent_dim. Currently, this argument is unused. Is there an easy way of "forcing" the autoencoder's latent dimension down to a certain number, e.g. by adding a fully connected layer in the middle or something along those lines?

Question 1

You forgot padding='same' in the Conv2D just before the last UpSampling2D (conv2d_6 in the summary above): without it, the (16, 16) feature map shrinks to (14, 14) and is then upsampled to (28, 28) instead of (32, 32).

The decoder should look like this:

        # at this point the representation is (4, 4, 8) i.e. 128-dimensional

        d = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
        d = UpSampling2D((2, 2))(d)
        d = Conv2D(8, (3, 3), activation='relu', padding='same')(d)
        d = UpSampling2D((2, 2))(d)
        d = Conv2D(16, (3, 3), activation='relu', padding='same')(d)
        d = UpSampling2D((2, 2))(d)

        decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(d)
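
As a quick check (a minimal sketch, assuming ConvAutoencoder has been patched with the padding='same' fix above), the output shape now matches the input:

ae = ConvAutoencoder(image_size=(32, 32), latent_dim=10)
print(ae.model.output_shape)    # (None, 32, 32, 1) - same as the input
print(ae.encoder.output_shape)  # (None, 4, 4, 8)   - the 128-dimensional bottleneck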

Question 2

If you mean the number of filters (channels), you can parameterize them with latent_dim:

        x = Conv2D(latent_dim*4, (3, 3), activation='relu', padding='same')(inp)
        x = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(latent_dim*2, (3, 3), activation='relu', padding='same')(x)
        x = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(latent_dim, (3, 3), activation='relu', padding='same')(x)
        encoded = MaxPooling2D((2, 2), padding='same')(x)
        # at this point the representation is (4, 4, latent_dim)

        d = Conv2D(latent_dim, (3, 3), activation='relu', padding='same')(encoded)
        d = UpSampling2D((2, 2))(d)
        d = Conv2D(latent_dim*2, (3, 3), activation='relu', padding='same')(d)
        d = UpSampling2D((2, 2))(d)
        d = Conv2D(latent_dim*4, (3, 3), activation='relu', padding='same')(d)
        d = UpSampling2D((2, 2))(d)
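
With this change, the bottleneck feature map becomes (4, 4, latent_dim). A quick check (assuming the class has been updated with these latent_dim-dependent filter counts):

ae = ConvAutoencoder(image_size=(32, 32), latent_dim=10)
print(ae.encoder.output_shape)  # (None, 4, 4, 10), i.e. 160-dimensional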

But if you meant that you want the bottleneck layer to have a specific number of channels, you can replace the last MaxPooling2D with a strided Conv2D, like this:

encoded = Conv2D(latent_dim, (3, 3), activation='relu', padding='same', strides=2)(x)

In fact, you can remove all the MaxPooling2D layers and instead use strides=2 on all the encoder Conv2D layers.
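
For example, a minimal sketch of the encoder with the pooling folded into strided convolutions (same downsampling path, (32, 32) -> (16, 16) -> (8, 8) -> (4, 4); the decoder can keep its UpSampling2D layers):

        x = Conv2D(latent_dim*4, (3, 3), activation='relu', padding='same', strides=2)(inp)    # (32, 32) -> (16, 16)
        x = Conv2D(latent_dim*2, (3, 3), activation='relu', padding='same', strides=2)(x)      # (16, 16) -> (8, 8)
        encoded = Conv2D(latent_dim, (3, 3), activation='relu', padding='same', strides=2)(x)  # (8, 8) -> (4, 4)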
