
Reproduce TensorFlow normalization in Keras

Actually, I'm trying to reproduce a TensorFlow model in Keras, and I'm really new to this topic. I would like to reproduce these lines:

embedding = tf.layers.conv2d(conv6, 128, (16, 16), padding='VALID', name='embedding')
embedding = tf.reshape(embedding, (-1, 128))
embedding = embedding - tf.reduce_min(embedding, keepdims=True)
z_n = embedding/tf.reduce_max(embedding, keepdims=True)

My current code is:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation


def conv_conv_pool(n_filters,
                   name,
                   pool=True,
                   activation='relu', padding='same', filters=(3,3)):
    """{Conv -> BN -> RELU}x2 -> {Pool, optional}
    Args:
        n_filters (list): number of filters [int, int]
        name (str): name postfix
        pool (bool): if True, add a strided-conv downsampling block
        activation: activation function
    Returns:
        net: Sequential block of the convolution (and optional downsampling) operations
    """
    net = Sequential()
    for i, F in enumerate(n_filters):
        conv = Conv2D(
            filters = F,
            kernel_size = (3,3),
            padding = 'same',
            )
        net.add(conv)
        batch_norm = BatchNormalization()
        net.add(batch_norm)
        net.add(Activation('relu'))

    if pool is False:
        return net

    pool = Conv2D(
        filters = F,
        kernel_size = (3,3),
        strides = (2,2),
        padding = 'same',  
        )
    net.add(pool)
    batch_norm = BatchNormalization()
    net.add(batch_norm)
    net.add(Activation('relu'))
    return net


def model_keras():
    model = Sequential()
    model.add(conv_conv_pool(n_filters = [8, 8], name="1"))
    model.add(conv_conv_pool([32, 32], name="2"))
    model.add(conv_conv_pool([32, 32], name="3"))
    model.add(conv_conv_pool([64, 64], name="4"))
    model.add(conv_conv_pool([64, 64], name="5"))
    model.add(conv_conv_pool([128, 128], name="6", pool=False))
    return model

The normalization should come after layer 6.

I was thinking of using a Lambda layer; is this correct? If so, how should I write it?

I believe you want to switch to TensorFlow 2, which uses Keras as its API. You will need to install/upgrade to TensorFlow 2; then you could try this:

import tensorflow as tf

# Conv2D is a layer class: instantiate it, then call it on the conv6 tensor
embedding = tf.keras.layers.Conv2D(128, (16, 16), padding='valid',
                                   name='embedding')(conv6)
embedding = tf.reshape(embedding, (-1, 128))
embedding = embedding - tf.math.reduce_min(embedding, keepdims=True)
z_n = embedding / tf.math.reduce_max(embedding, keepdims=True)

If you want to use the Keras layer API you can create a custom layer; you can find the documentation on how to do it here: https://www.tensorflow.org/guide/keras/custom_layers_and_models. You should end up with something like this:

from tensorflow.keras import layers

class NormalizationLayer(layers.Layer):

  def __init__(self, filters=128, **kwargs):
    super(NormalizationLayer, self).__init__(**kwargs)
    self.filters = filters
    # create the sub-layer once here, not on every call
    self.conv = layers.Conv2D(filters, (16, 16), padding='valid', name='embedding')

  def call(self, inputs):
    embedding = self.conv(inputs)
    embedding = tf.reshape(embedding, (-1, self.filters))
    embedding = embedding - tf.math.reduce_min(embedding, keepdims=True)
    z_n = embedding / tf.math.reduce_max(embedding, keepdims=True)
    return z_n
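
A minimal usage sketch (the 16x16x64 input shape below is an assumption on my side, matching what a (16, 16) valid convolution expects; it is not given in the question):

# hypothetical usage: the (16, 16, 64) input shape is assumed, not from the question
inp = tf.keras.Input((16, 16, 64))
z_n = NormalizationLayer(filters=128)(inp)  # maps (None, 16, 16, 64) -> (None, 128)
model = tf.keras.Model(inp, z_n)
model.summary()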

I used the normalization you introduced inside a Lambda layer. I also made a correction: min and max are computed on the same input, rather than one on the input and the other on the transformation (but you can change this back). norm_original normalizes a 4D input with min and max computed over ALL the channels and tries to return a 2D output with a fixed number of features; this produces an error because you are modifying the batch dimension:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, Lambda, Flatten, Dense,
                                     GlobalMaxPool2D)
from tensorflow.keras.models import Model

def norm_original(inp):
    # flattening the spatial dims into the batch dim changes the number of "rows",
    # so the output no longer matches the (batch, 128) target below
    embedding = tf.reshape(inp, (-1, inp.shape[-1]))
    embedding = embedding - tf.reduce_min(inp)
    embedding = embedding / tf.reduce_max(inp)

    return embedding

inp = Input((28,28,3))
x = Conv2D(128, 3, padding='same')(inp)
x = Lambda(norm_original)(x)

m = Model(inp, x)
m.compile('adam', 'mse')
m.summary()

X = np.random.uniform(0,1, (10,28,28,3))
y = np.random.uniform(0,1, (10,128))

m.fit(X,y, epochs=3) # error

To avoid this error I propose two possibilities. I also made a change to normalize per channel (which I consider more appropriate), but you can modify this as well.

1) You can normalize the 4D input with min/max and then flatten the output, putting everything on the last dimension. This solution doesn't alter the batch dim:

def norm(inp):
    # per-channel normalization: min and max are computed over the batch and spatial dims
    embedding = inp - tf.reduce_min(inp, keepdims=True, axis=[0,1,2])
    embedding = embedding / tf.reduce_max(inp, keepdims=True, axis=[0,1,2])

    return embedding

inp = Input((28,28,3))
x = Conv2D(128, 3, padding='same')(inp)
x = Lambda(norm)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)

m = Model(inp, x)
m.compile('adam', 'mse')

X = np.random.uniform(0,1, (10,28,28,3))
y = np.random.uniform(0,1, (10,128))

m.fit(X,y, epochs=3)
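
As a quick sanity-check sketch: the Lambda keeps the (batch, 28, 28, 128) shape, and only Flatten and Dense reshape the features, so the batch dimension is never modified:

# sanity check (sketch): Flatten + Dense map (10, 28, 28, 128) to (10, 128)
print(m.output_shape)      # (None, 128)
print(m.predict(X).shape)  # (10, 128)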

2) You can use a GlobalPooling layer to collapse the spatial dimensions and reduce the 4D tensor to a 2D shape, preserving the feature dimension:

inp = Input((28,28,3))
x = Conv2D(128, 3, padding='same')(inp)
x = Lambda(norm)(x)
x = GlobalMaxPool2D()(x) # you can also use GlobalAveragePooling2D

m = Model(inp, x)
m.compile('adam', 'mse')

X = np.random.uniform(0,1, (10,28,28,3))
y = np.random.uniform(0,1, (10,128))

m.fit(X,y, epochs=3)
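
Again as a quick sanity-check sketch, the global pooling collapses the 28x28 grid to one value per channel, so the feature dimension survives intact:

out = m.predict(X)
print(out.shape)  # (10, 128): spatial dims pooled away, 128 channels preserved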
