
Keras variational autoencoder example - usage of latent input

I'm new to Keras, and have been struggling to understand the usage of the variable z in the variational autoencoder example in their official GitHub repository. I don't understand why z is not being used instead of the variable latent_inputs. I ran the code and it seems to work, but I don't understand whether z is being used behind the scenes, and what mechanism in Keras is responsible for that. Here is the relevant code snippet:

from keras.layers import Lambda, Input, Dense
from keras.models import Model
from keras.utils import plot_model

# `sampling` (the reparameterization function) is defined earlier in the
# full example and omitted here

# VAE model = encoder + decoder
# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = Dense(intermediate_dim, activation='relu')(inputs)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)

# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# instantiate encoder model
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
encoder.summary()
plot_model(encoder, to_file='vae_mlp_encoder.png', show_shapes=True)

# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = Dense(original_dim, activation='sigmoid')(x)

# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()
plot_model(decoder, to_file='vae_mlp_decoder.png', show_shapes=True)

# instantiate VAE model
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae_mlp')

Your encoder is defined as a model that takes the input inputs and gives the outputs [z_mean, z_log_var, z]. You then define your decoder separately to take some input, here called latent_inputs, and produce outputs. Finally, your overall model is defined in the line:

outputs = decoder(encoder(inputs)[2])

This means you run encoder on your inputs, which yields [z_mean, z_log_var, z], and then the third element of that result (call it result[2]) gets passed in as the input argument to decoder. In other words, when you wire up the network, you are setting latent_inputs equal to the third output of your encoder, i.e. [z_mean, z_log_var, z][2] = z. You could expand it as:

encoder_outputs = encoder(inputs)      # [z_mean, z_log_var, z]
outputs = decoder(encoder_outputs[2])  # decoder's latent_inputs receives z

The encoder and decoder are defined separately so that each can be used on its own:

  • Given some inputs, encoder computes their latent vectors / lower-dimensional representations z_mean, z_log_var, z (you could use the encoder by itself, e.g. to store those lower-dimensional representations or to compare inputs more easily).

  • Given such a lower-dimensional representation latent_inputs, decoder returns the decoded information outputs (e.g. if you need to reuse the stored lower-dimensional representations); see the sketch right after this list.
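
A minimal sketch of both standalone uses (assuming the encoder and decoder above, latent_dim, and test data x_test already flattened to shape (n, original_dim) as in the full example):

import numpy as np

# use the encoder alone: map data to its latent representation
z_mean, z_log_var, z = encoder.predict(x_test)

# use the decoder alone: reconstruct from stored latent vectors
reconstructions = decoder.predict(z)

# or sample fresh latent vectors from the prior to generate new data
z_new = np.random.normal(size=(10, latent_dim))
generated = decoder.predict(z_new)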

To train or use the complete VAE, both operations are simply chained, exactly the way the example does it: outputs = decoder(encoder(inputs)[2]) (the latent_inputs of decoder receives the z output of encoder). The mechanism behind this is the Keras functional API: a Model can be called like a layer, so decoder(...) reuses the decoder's layers on whatever tensor you pass in, substituting it for the latent_inputs placeholder. So yes, z is used behind the scenes whenever the chained vae model runs; latent_inputs only receives data directly when you use decoder on its own.
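
If you want to convince yourself that nothing is duplicated, here is a hypothetical sanity check (it relies on Model.get_layer and on the fact that a nested model shows up as a layer of the outer model):

# the chained VAE reuses, and therefore trains, the very same layer
# objects as the stand-alone encoder and decoder
assert vae.get_layer('encoder') is encoder
assert vae.get_layer('decoder') is decoder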
