
2D convolution with reparameterization using just tf.nn.convolution

I want to do something like what tfp.layers.Conv2DReparameterization does, but simpler (no priors, etc.).

Given an augmented input x of shape [num_particles, batch, in_height, in_width, in_channels], and trainable variables f_mean and f_std (the filter mean and standard deviation), each of shape [filter_height, filter_width, in_channels, out_channels], I use the reparameterization trick to draw filter samples:

filter_samples = f_mean + f_std * tf.random_normal([num_particles] + f_mean.shape.as_list())

Thus, filter_samples has shape [num_particles, filter_height, filter_width, in_channels, out_channels].
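The sampling step relies only on broadcasting the trailing filter dimensions against a leading particle axis of noise; here is a minimal NumPy sketch of that shape logic (the sizes are made-up illustration values, not anything from the question):

```python
import numpy as np

# Hypothetical sizes for illustration.
num_particles, fh, fw, cin, cout = 3, 3, 3, 2, 4

f_mean = np.zeros((fh, fw, cin, cout))     # stands in for the trainable mean
f_std = np.full((fh, fw, cin, cout), 0.1)  # stands in for the trainable std

# Reparameterization trick: one independent noise draw per particle.
# f_mean and f_std broadcast across the leading num_particles axis.
eps = np.random.randn(num_particles, fh, fw, cin, cout)
filter_samples = f_mean + f_std * eps

print(filter_samples.shape)  # (3, 3, 3, 2, 4)
```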

Then, I want to do:

output = tf.nn.conv2d(x, filter_samples, strides=[1, 1, 1, 1], padding='SAME')  # or 'VALID'

where output should have shape [num_particles] + the standard convolution output shape.

For dense layers, simply calling tf.matmul(x, filter_samples) works, but for conv2d I'm not sure the results are what I expect, and I can't find the implementation code to check. Implementing the convolution myself would end up slower than TF's code, so I want to avoid that.
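The dense case works because matmul treats all leading dimensions as batch dimensions and broadcasts over them, so the particle axis is handled for free; a small NumPy sketch (with made-up sizes) showing the same batched-matmul semantics:

```python
import numpy as np

# Hypothetical sizes for illustration.
num_particles, batch, d_in, d_out = 3, 5, 4, 2

x = np.random.randn(num_particles, batch, d_in)   # per-particle inputs
w = np.random.randn(num_particles, d_in, d_out)   # per-particle weight samples

# matmul pairs up the leading num_particles axis and multiplies the
# trailing two dimensions, giving one independent matmul per particle.
out = np.matmul(x, w)
print(out.shape)  # (3, 5, 2)
```

tf.nn.conv2d offers no such batched-filter broadcasting: it expects a 4-D input and a single 4-D filter, which is why the 5-D tensors above don't behave the same way.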

For SAME padding the resulting shape seems okay, but for VALID the batch dimension changes, which makes me believe it isn't doing what I expect.

Just to make it clear: I need the output to keep the num_particles dimension. The code is TF 1.x.

Any ideas on how to get that?
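As a reference for what "correct" means here, the intended semantics are one independent standard convolution per particle. This is a slow pure-NumPy sketch of those semantics (a naive VALID convolution looped over particles, with made-up sizes), useful for checking shapes against whatever fast TF formulation is used:

```python
import numpy as np

def conv2d_valid(x, f):
    """Naive VALID conv: x [batch, H, W, cin], f [fh, fw, cin, cout]."""
    b, H, W, cin = x.shape
    fh, fw, _, cout = f.shape
    out = np.empty((b, H - fh + 1, W - fw + 1, cout))
    for i in range(H - fh + 1):
        for j in range(W - fw + 1):
            patch = x[:, i:i + fh, j:j + fw, :]  # [b, fh, fw, cin]
            # Contract the window and channel axes against the filter.
            out[:, i, j, :] = np.tensordot(patch, f, axes=([1, 2, 3], [0, 1, 2]))
    return out

# Hypothetical sizes for illustration.
num_particles, batch, H, W, cin, cout, k = 3, 2, 8, 8, 2, 4, 3
x = np.random.randn(num_particles, batch, H, W, cin)
filter_samples = np.random.randn(num_particles, k, k, cin, cout)

# Desired semantics: one standard convolution per particle, stacked.
output = np.stack([conv2d_valid(x[p], filter_samples[p])
                   for p in range(num_particles)])
print(output.shape)  # (3, 2, 6, 6, 4)
```

In TF 1.x the same loop-over-particles semantics can be expressed with tf.map_fn over the leading axis, at the cost of running one conv2d per particle rather than a single fused op.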

I think there is some code that does something similar in tfp.experimental.nn. We can follow up in the GitHub issues you filed/responded to.

