
Keras variational autoencoder example - usage of latent input

I'm new to Keras and have been struggling to understand the usage of the variable z in the variational autoencoder example in their official GitHub repository. I don't understand why z is not used instead of the variable latent_inputs. I ran the code and it works, but I don't understand whether z is being used behind the scenes and what mechanism in Keras is responsible for that. Here is the relevant code snippet:

# VAE model = encoder + decoder
# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = Dense(intermediate_dim, activation='relu')(inputs)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)

# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# instantiate encoder model
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
encoder.summary()
plot_model(encoder, to_file='vae_mlp_encoder.png', show_shapes=True)

# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = Dense(original_dim, activation='sigmoid')(x)

# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()
plot_model(decoder, to_file='vae_mlp_decoder.png', show_shapes=True)

# instantiate VAE model
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae_mlp')
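
Note that the sampling function passed to the Lambda layer is not shown in this snippet. In the official example it implements the reparameterization trick along these lines (a sketch using the Keras backend K):

from keras import backend as K

def sampling(args):
    """Reparameterization trick: z = z_mean + sigma * epsilon, epsilon ~ N(0, I)."""
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon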

Your encoder is defined as a model that takes the tensor inputs and produces the outputs [z_mean, z_log_var, z]. You then define your decoder separately to take some input, here called latent_inputs, and produce outputs. Finally, your overall model is defined in the line:

outputs = decoder(encoder(inputs)[2])

This means you run encoder on your inputs, which yields [z_mean, z_log_var, z], and then the third element of that result (call it result[2]) is passed as the input argument to decoder. In other words, when you build the full network, you are setting latent_inputs equal to the third output of your encoder, i.e. [z_mean, z_log_var, z][2] = z. You can write this equivalently as:

encoder_outputs = encoder(inputs)       # [z_mean, z_log_var, z]
outputs = decoder(encoder_outputs[2])   # latent_inputs receives z
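
Or, unpacking the three encoder outputs by name, which makes the wiring even more explicit (a sketch with identical behavior to the indexed version above):

z_mean, z_log_var, z = encoder(inputs)
outputs = decoder(z)  # the symbolic tensor z feeds the decoder's latent_inputs placeholder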

The encoder and decoder are defined separately so that each can be used on its own:

  • Given some inputs, encoder computes their latent (lower-dimensional) representations z_mean, z_log_var, z. You could use the encoder by itself, e.g. to store those lower-dimensional representations or to compare samples more easily.

  • Given such a lower-dimensional representation latent_inputs, decoder returns the decoded information outputs, e.g. if you need to reconstruct data from stored latent representations (see the sketch after this list).
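
A minimal sketch of that standalone use, assuming x_test holds flattened test images of size original_dim (as in the MNIST example):

import numpy as np

# Encode data to its latent representation, then decode it back.
z_mean_test, z_log_var_test, z_test = encoder.predict(x_test)
x_decoded = decoder.predict(z_test)

# The decoder alone can also generate new samples from arbitrary latent vectors.
z_random = np.random.normal(size=(10, latent_dim))
x_generated = decoder.predict(z_random)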

To train/use the complete VAE, both operations can simply be chained the way the example actually does it: outputs = decoder(encoder(inputs)[2]), with the latent_inputs of decoder receiving the z output of encoder.
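
In the official example, training then attaches the VAE loss (reconstruction term plus KL divergence) to the chained model, roughly as follows; the optimizer, epochs and batch_size values here are just illustrative:

from keras import backend as K
from keras.losses import binary_crossentropy

# Reconstruction term: per-pixel binary cross-entropy, scaled to a sum over the image.
reconstruction_loss = binary_crossentropy(inputs, outputs) * original_dim
# KL term: divergence between the learned latent Gaussian and N(0, I).
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae.add_loss(K.mean(reconstruction_loss + kl_loss))
vae.compile(optimizer='adam')
vae.fit(x_train, epochs=50, batch_size=128, validation_data=(x_test, None))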
