I'm experimenting with an older piece of code that creates a very basic encoder:
def make_encoder(data, code_size):
    x = tf.layers.flatten(data)
    x = tf.layers.dense(x, 200, tf.nn.relu)
    x = tf.layers.dense(x, 200, tf.nn.relu)
    loc = tf.layers.dense(x, code_size)
    scale = tf.layers.dense(x, code_size, tf.nn.softplus)
    return tfd.MultivariateNormalDiag(loc, scale)
I'm trying to migrate this code to TensorFlow 2 because tf.layers.dense and the other tf.layers functions are deprecated. I'm not very familiar with how tf.keras.layers could implement the above, but I was able to get this running:
def make_encoder(data, code_size):
    model = Sequential()
    model.add(Flatten())
    model.add(Dense(200, activation='relu'))
    model.add(Dense(200, activation='relu'))
    x = model(data)
    loc = model
    scale = model
    loc.add(Dense(code_size))
    scale.add(Dense(code_size, activation='softplus'))
    loc = loc(data)
    scale = scale(data)
    return tfd.MultivariateNormalDiag(loc, scale)
When I run the program, I get very different (and worse) results than before. I'm certain I'm doing something wrong, or am going about this the wrong way.
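(For reference, the likely culprit: in Python, `loc = model` and `scale = model` do not copy the Sequential model; all three names are bound to the same object, so both extra Dense layers end up stacked onto one model. A minimal pure-Python sketch of that aliasing, using hypothetical layer names as strings:)

```python
# Assignment binds a name to an object; it never copies the object.
base = ["flatten", "dense_relu_200", "dense_relu_200"]
loc = base      # alias of base, not a copy
scale = base    # another alias of the same list

loc.append("dense_loc")          # mutates the shared list
scale.append("dense_softplus")   # mutates the same shared list

# All three names now see all five entries:
print(base)  # ['flatten', 'dense_relu_200', 'dense_relu_200', 'dense_loc', 'dense_softplus']
```

The same thing happens with the Sequential model above: both output heads get appended to a single shared stack instead of forming two branches.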
The functional API is recommended for defining complex models, such as multi-output models, directed acyclic graphs, or models with shared layers. Your code should look something like this:
def Encoder(data, code_size):
    inputs = Input(shape=data.shape[1:])
    x = Flatten()(inputs)
    x = Dense(200, activation='relu')(x)
    x = Dense(200, activation='relu')(x)
    loc = Dense(code_size)(x)
    scale = Dense(code_size, activation='softplus')(x)
    return Model(inputs=inputs, outputs=[loc, scale])
def make_encoder(data, code_size):
    # Encoder returns a Model; call it on the data to get the two outputs.
    loc, scale = Encoder(data, code_size)(data)
    return tfd.MultivariateNormalDiag(loc, scale)
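Putting it together, here is a minimal runnable sketch of the functional-API encoder (assuming TensorFlow 2.x; the 28x28 input shape and batch size are illustrative, and the `tfd.MultivariateNormalDiag` wrapping from the original code is noted in a comment rather than executed):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Flatten

def build_encoder(input_shape, code_size):
    # Functional API: one shared trunk, two separate output heads.
    inputs = Input(shape=input_shape)
    x = Flatten()(inputs)
    x = Dense(200, activation='relu')(x)
    x = Dense(200, activation='relu')(x)
    loc = Dense(code_size)(x)
    scale = Dense(code_size, activation='softplus')(x)  # softplus keeps scale > 0
    return Model(inputs=inputs, outputs=[loc, scale])

encoder = build_encoder(input_shape=(28, 28), code_size=2)  # MNIST-sized, illustrative
data = tf.constant(np.random.rand(8, 28, 28), dtype=tf.float32)
loc, scale = encoder(data)
# tfd.MultivariateNormalDiag(loc, scale) would then wrap these two tensors,
# exactly as in the original TF1 version.
print(loc.shape, scale.shape)  # (8, 2) (8, 2)
```

Because both heads branch off the same trunk, the two Dense(200) layers are shared between `loc` and `scale`, matching the structure of the original tf.layers code.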