Using tf.split or tf.slice for keras layers

I want to split my training samples of size [batchSize, 2, 16] into 16 tensors of size [batchSize, 2] and feed them to the same model. How can I accomplish this in Keras?

I first implemented it in the following way:

def functionA(x_Config):
    # Input holding all 16 slices: [batchSize, 2, 16]
    y = layers.Input(shape=(2, 16,))

    hidden1 = 5
    hidden2 = 10

    # Shared sub-model applied to every [batchSize, 2] slice
    x_input = layers.Input(shape=(2,))
    hidden_layer1 = Dense(hidden1, activation='relu')(x_input)
    hidden_layer2 = Dense(hidden2, activation='relu')(hidden_layer1)
    x_output = Dense(x_Config.m, activation='linear')(hidden_layer2)
    model_x = Model(inputs=x_input, outputs=x_output)

    for i in range(16):
        # Bind i as a default argument so each Lambda slices its own
        # index instead of closing over the loop variable
        x_slice = Lambda(lambda x, idx=i: x[:, :, idx])(y)

        if i == 0:
            x_output = model_x(x_slice)
        else:
            x_output = layers.concatenate([x_output, model_x(x_slice)])

    # N is defined elsewhere in the original code
    x_output = Lambda(lambda x: x[:, :tf.cast(N, tf.int32)])(x_output)

    final_model = Model(y, x_output)

    return final_model

PS: I have a trained model with the same NN architecture as model_x for [batchSize, 2] inputs (without the need for splitting). That model performs very well. But when I load its weights into model_x in the code above, it does not perform well at all and does not train well.

So I believe my problem lies inside the loop, in the Lambda layer. How can I use tf.split or tf.slice for this?

You can use the tf.unstack function:

tensors = tf.unstack(data, axis=2)
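A minimal sketch of how the full model could be wired with tf.unstack, assuming TF 2.x tf.keras; the hidden sizes are taken from the question, and the output width 4 stands in for the question's x_Config.m:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Shared sub-model applied to each [batchSize, 2] slice
x_input = layers.Input(shape=(2,))
h = layers.Dense(5, activation='relu')(x_input)
h = layers.Dense(10, activation='relu')(h)
x_out = layers.Dense(4, activation='linear')(h)  # 4 stands in for x_Config.m
model_x = Model(x_input, x_out)

# Full model: one Lambda unstacks axis 2 into 16 tensors of [batchSize, 2],
# replacing the per-index slicing loop from the question
y = layers.Input(shape=(2, 16))
slices = layers.Lambda(lambda t: tf.unstack(t, axis=2))(y)
outputs = layers.concatenate([model_x(s) for s in slices])
final_model = Model(y, outputs)  # output shape: (None, 16 * 4)
```

Note that tf.split(data, 16, axis=2) would instead give 16 tensors of shape [batchSize, 2, 1], each needing an extra squeeze, so tf.unstack is the more direct fit here.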
