
Reproduce normalization from tensorflow to keras

I'm trying to reproduce a tensorflow model in keras, and I'm really new to this topic. I want to reproduce these lines:

embedding = tf.layers.conv2d(conv6, 128, (16, 16), padding='VALID', name='embedding')
embedding = tf.reshape(embedding, (-1, 128))
embedding = embedding - tf.reduce_min(embedding, keepdims=True)
z_n = embedding / tf.reduce_max(embedding, keepdims=True)

My actual code is:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation


def conv_conv_pool(n_filters,
                   name,
                   pool=True,
                   activation='relu', padding='same', kernel_size=(3, 3)):
    """{Conv -> BN -> RELU}x2 -> {strided Conv, optional}
    Args:
        n_filters (list): number of filters [int, int]
        name (str): name postfix
        pool (bool): if True, append a strided convolution for downsampling
        activation: activation function
        padding: padding mode of the convolutions
        kernel_size: kernel size of the convolutions
    Returns:
        net: Sequential block with the convolution operations
    """
    net = Sequential()
    for F in n_filters:
        net.add(Conv2D(filters=F, kernel_size=kernel_size, padding=padding))
        net.add(BatchNormalization())
        net.add(Activation(activation))

    if pool is False:
        return net

    # downsampling via a strided convolution instead of MaxPool2D
    net.add(Conv2D(filters=F, kernel_size=kernel_size, strides=(2, 2), padding=padding))
    net.add(BatchNormalization())
    net.add(Activation(activation))
    return net


def model_keras():
    model = Sequential()
    model.add(conv_conv_pool(n_filters = [8, 8], name="1"))
    model.add(conv_conv_pool([32, 32], name="2"))
    model.add(conv_conv_pool([32, 32], name="3"))
    model.add(conv_conv_pool([64, 64], name="4"))
    model.add(conv_conv_pool([64, 64], name="5"))
    model.add(conv_conv_pool([128, 128], name="6", pool=False))
    return model

The normalization should come after layer 6.

I was thinking of using a Lambda layer, is that correct? If so, how should I write it?

I believe you want to switch to tensorflow 2, which uses keras as its API. You need to install/upgrade to tensorflow 2, and then you can try this:

import tensorflow as tf

embedding = tf.keras.layers.Conv2D(128, (16, 16), padding='valid',
                                   name='embedding')(conv6)
# tf.keras.layers.Reshape cannot touch the batch dimension, so use tf.reshape
embedding = tf.reshape(embedding, (-1, 128))
embedding = embedding - tf.math.reduce_min(embedding, keepdims=True)
z_n = embedding / tf.math.reduce_max(embedding, keepdims=True)
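
For intuition (my own addition, not part of the original answer), the two reduce ops implement a global min-max rescaling to [0, 1]; a tiny standalone check:

import tensorflow as tf

t = tf.constant([[2.0, 4.0], [6.0, 10.0]])
t = t - tf.math.reduce_min(t, keepdims=True)   # subtract the global min (2.0)
z = t / tf.math.reduce_max(t, keepdims=True)   # divide by the shifted global max (8.0)
print(z.numpy())  # [[0.   0.25]
                  #  [0.5  1.  ]]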

If you want to use the keras layer API, you can create a custom layer; the documentation on how to do it is here: https://www.tensorflow.org/guide/keras/custom_layers_and_models. You should end up with something like this:

import tensorflow as tf
from tensorflow.keras import layers

class NormalizationLayer(layers.Layer):

  def __init__(self, filters=128):
    super(NormalizationLayer, self).__init__()
    self.filters = filters
    # create the conv layer once here, not on every call
    self.conv = layers.Conv2D(filters, (16, 16), padding='valid', name='embedding')

  def call(self, inputs):
    embedding = self.conv(inputs)
    embedding = tf.reshape(embedding, (-1, self.filters))
    embedding = embedding - tf.math.reduce_min(embedding, keepdims=True)
    z_n = embedding / tf.math.reduce_max(embedding, keepdims=True)
    return z_n
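
A minimal usage sketch (my own addition; the 16x16x64 input shape is a hypothetical choice, so that the (16, 16) VALID convolution yields a 1x1 feature map and the reshape leaves the batch dimension intact):

import tensorflow as tf

inputs = tf.keras.Input((16, 16, 64))          # hypothetical input shape
z_n = NormalizationLayer(filters=128)(inputs)
model = tf.keras.Model(inputs, z_n)
model.summary()                                # final output shape: (None, 128)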

I use the normalization you introduced inside a Lambda layer. I also made a correction (min and max are computed on the same input, rather than one on the input and the other on the transformation), but you can change that too. `norm_original` normalizes the 4D input, computing the min and max over all channels, and tries to return a 2D output with a fixed number of features. This generates an error because you are modifying the batch dimension:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Lambda
from tensorflow.keras.models import Model

def norm_original(inp):

    embedding = tf.reshape(inp, (-1, inp.shape[-1]))
    embedding = embedding - tf.reduce_min(inp)
    embedding = embedding / tf.reduce_max(inp)

    return embedding

inp = Input((28,28,3))
x = Conv2D(128, 3, padding='same')(inp)
x = Lambda(norm_original)(x)

m = Model(inp, x)
m.compile('adam', 'mse')
m.summary()

X = np.random.uniform(0,1, (10,28,28,3))
y = np.random.uniform(0,1, (10,128))

m.fit(X,y, epochs=3) # error: the reshape turns (10,28,28,128) into (7840,128)

To avoid this error I propose two possibilities. I also changed the normalization to operate per channel (I find it more appropriate), but you can modify this as well.

1) You can normalize the 4D input with min/max and then flatten the output, squeezing everything into the last dimension. This solution doesn't alter the batch dimension:

from tensorflow.keras.layers import Flatten, Dense

def norm(inp):
    ## this function operates the normalization by channel
    embedding = inp - tf.reduce_min(inp, keepdims=True, axis=[0,1,2])
    embedding = embedding / tf.reduce_max(inp, keepdims=True, axis=[0,1,2])

    return embedding

inp = Input((28,28,3))
x = Conv2D(128, 3, padding='same')(inp)
x = Lambda(norm)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)

m = Model(inp, x)
m.compile('adam', 'mse')

X = np.random.uniform(0,1, (10,28,28,3))
y = np.random.uniform(0,1, (10,128))

m.fit(X,y, epochs=3)

2) You can use a GlobalPooling layer to reduce the 4D dimensions to a 2D shape, preserving the feature dimension:

from tensorflow.keras.layers import GlobalMaxPool2D

inp = Input((28,28,3))
x = Conv2D(128, 3, padding='same')(inp)
x = Lambda(norm)(x)
x = GlobalMaxPool2D()(x) # you can also use GlobalAveragePooling2D

m = Model(inp, x)
m.compile('adam', 'mse')

X = np.random.uniform(0,1, (10,28,28,3))
y = np.random.uniform(0,1, (10,128))

m.fit(X,y, epochs=3)
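
A quick check (my own addition) that this variant keeps the batch dimension and the 128 features:

print(m.predict(X).shape)  # (10, 128): batch of 10 preserved, 128 channels kept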
