
Tensorflow LSTM: How to use different weights for each batch?

I'm talking about the tf.keras.layers.LSTM implementation, as I want to use cuDNN for my batched LSTM.

Right now I use a hand-made LSTM implementation, because I want different weights/biases for each batch. Do you know a way to use TensorFlow's LSTM implementation with a unique set of weights/biases per batch?
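
For reference, here is roughly what one step of such a hand-made LSTM looks like with per-sample weights (all shapes and names below are illustrative, not my actual code). The point is that the batch dimension B is baked into the weight variables themselves:

    import tensorflow as tf

    B, D, H = 4, 32, 64                      # batch size, input size, hidden size (illustrative)
    x_t = tf.random.normal((B, D))           # input at one time step
    h_prev = tf.zeros((B, H))                # previous hidden state
    c_prev = tf.zeros((B, H))                # previous cell state

    # One weight/bias set per sample: the leading batch dimension B is part of the variables.
    W = tf.Variable(tf.random.normal((B, D + H, 4 * H), stddev=0.05), name='lstm/kernel')
    b = tf.Variable(tf.zeros((B, 4 * H)), name='lstm/bias')

    # Batched matmul: (B, 1, D+H) @ (B, D+H, 4H) -> (B, 4H) after squeezing.
    z = tf.squeeze(tf.matmul(tf.expand_dims(tf.concat([x_t, h_prev], axis=-1), 1), W), 1) + b
    i_g, f_g, g_g, o_g = tf.split(z, 4, axis=-1)
    c_t = tf.sigmoid(f_g) * c_prev + tf.sigmoid(i_g) * tf.tanh(g_g)
    h_t = tf.sigmoid(o_g) * tf.tanh(c_t)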

Maybe you can use something like this. It is an example of a fully-connected layer for a CNN with a separate kernel per sample:

    import tensorflow as tf

    def dense_fc4(samples_flat):
        # One independent (1024, 512) kernel per sample in the batch.
        n_objects = samples_flat.shape[0]
        initializer = tf.keras.initializers.GlorotUniform()  # Xavier/Glorot initialization
        W4 = tf.Variable(initializer(shape=(n_objects, 1024, 512)), name='fc4/kernel')
        b4 = tf.Variable(tf.zeros((512,)), name='fc4/bias')
        # Batched matmul: (n_objects, 1, 1024) @ (n_objects, 1024, 512) -> (n_objects, 1, 512)
        fc4 = tf.matmul(tf.expand_dims(samples_flat, 1), W4) + b4
        return tf.nn.relu(tf.squeeze(fc4, axis=1))
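
A call might look like this (the batch size 8 and the input data are made up). tf.matmul treats the leading dimension of W4 as a batch dimension, so each sample is multiplied by its own kernel:

    samples_flat = tf.random.normal((8, 1024))  # hypothetical batch of 8 flattened samples
    out = dense_fc4(samples_flat)               # shape (8, 512)

The same batched-matmul trick applies to the LSTM gate multiplications. Note, though, that the cuDNN-backed kernel behind tf.keras.layers.LSTM expects a single shared weight set for the whole batch, so per-sample weights rule out that fast path.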
