
Change filter on convolutional layer CNN - Python/TensorFlow

I have the following code block:

def new_weights(shape):
    # trainable weights, randomly initialized from a truncated normal
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

And:

def new_conv_layer(input,              # The previous layer
                   use_pooling=True):  # Use 2x2 max-pooling

    shape = [3, 3, 1, 8]

    weights = new_weights(shape=shape)

    biases = new_biases(length=8)

    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')

    layer += biases

    if use_pooling:
        layer = tf.nn.max_pool(value=layer,
                               ksize=[1, 2, 2, 1],
                               strides=[1, 2, 2, 1],
                               padding='SAME')

    layer = tf.nn.relu(layer)

    # relu(max_pool(x)) == max_pool(relu(x)), so we can save
    # 75% of the relu operations by max-pooling first.

    return layer
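
As an aside, the equality claimed in that last comment can be checked directly. A minimal TF1 sketch (not part of the original question): since ReLU is monotonic, it commutes with max-pooling.

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.randn(1, 4, 4, 1).astype(np.float32))
a = tf.nn.relu(tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME'))
b = tf.nn.max_pool(tf.nn.relu(x), ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
with tf.Session() as sess:
    print(np.allclose(*sess.run([a, b])))  # True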

So we can see that the filter size is 3x3, the number of filters is 8 (shape [3, 3, 1, 8], with one input channel), and the filters are initialized with random values.

What I need to do is define all 8 of my filters with fixed, i.e. predetermined, values, for example:

weights = [
    [[0,  1, 0,],[0, -1, 0,],[0,  0, 0,],],
    [[0,  0, 1,],[0, -1, 0,],[0,  0, 0,],],
    [[0,  0, 0,],[0, -1, 1,],[0,  0, 0,],],
    [[0,  0, 0,],[0, -1, 0,],[0,  0, 1,],],
    [[0,  0, 0,],[0, -1, 0,],[0,  1, 0,],],
    [[0,  0, 0,],[0, -1, 0,],[1,  0, 0,],], 
    [[0,  0, 0,],[1, -1, 0,],[0,  0, 0,],],
    [[1,  0, 0,],[0, -1, 0,],[0,  0, 0,],]
]

I cannot figure out how to make this modification inside my code. Does anyone have an idea how I could do this?

Thank you very much in advance!
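
For reference, the nested list above has shape (8, 3, 3), while the `filter` argument of `tf.nn.conv2d` expects (filter_height, filter_width, in_channels, out_channels), i.e. (3, 3, 1, 8). A minimal shape check (not part of the original question, assuming the `weights` list above is in scope):

import numpy as np

w = np.array(weights, dtype=np.float32)   # the list of 8 filters above
print(w.shape)                            # (8, 3, 3)
# one 3x3 single-channel kernel per output filter
kernel = np.transpose(w, (1, 2, 0))[:, :, np.newaxis, :]
print(kernel.shape)                       # (3, 3, 1, 8)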

You just define the weights as non-trainable and build them from the predefined values:

# 'weights' is the (8, 3, 3) list from the question; conv2d expects
# (filter_height, filter_width, in_channels, out_channels), so transpose first
kernel = np.transpose(np.array(weights, np.float32), (1, 2, 0))[:, :, np.newaxis, :]
new_weights = tf.Variable(kernel, trainable=False)
# then apply it to the input
layer = tf.nn.conv2d(input=input, filter=new_weights, strides=[1, 1, 1, 1], padding='SAME')
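
Because the variable is created with trainable=False, optimizers will skip it. A quick check (a sketch, assuming the snippet above has run, numpy/tensorflow (TF1) are imported as np/tf, and using a made-up 28x28 grayscale placeholder):

x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])   # hypothetical input, for illustration
out = tf.nn.conv2d(input=x, filter=new_weights, strides=[1, 1, 1, 1], padding='SAME')
print(tf.trainable_variables())   # [] -- the fixed filters are not trainable
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out, feed_dict={x: np.zeros((1, 28, 28, 1), np.float32)}).shape)  # (1, 28, 28, 8)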

If you want to initialize the weights with some predefined values, you can use tf.constant_initializer. If you don't want to train these weights, you can define them as a tf.constant instead of a tf.Variable:

import numpy as np
import tensorflow as tf

def new_weights(init_value, is_const):
    if is_const:
        # fixed weights: a constant can never be trained
        return tf.constant(init_value, name='weights')
    else:
        # trainable variable, initialized with the predefined values
        initializer = tf.constant_initializer(init_value)
        return tf.get_variable('weights', shape=init_value.shape, initializer=initializer)

weights = np.ones([3, 3, 1, 8], dtype=np.float32)
print(weights.shape)

value = new_weights(weights, True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    value_ = sess.run(value)
    print(value_)
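
For contrast, passing is_const=False returns a variable that starts from the same predefined values but remains trainable; a short follow-up sketch under the same assumptions:

trainable_w = new_weights(weights, False)   # same initial values, but optimizers will update it
print(tf.trainable_variables())             # [<tf.Variable ... shape=(3, 3, 1, 8) ...>]

# to keep the values fixed while still using get_variable, pass trainable=False:
#   tf.get_variable('weights_fixed', shape=weights.shape,
#                   initializer=tf.constant_initializer(weights), trainable=False)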

This is how you can do it in TF2:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential()
# one 3x3 filter
model.add(layers.Conv2D(1, (3, 3), input_shape=(None, None, 1)))
# access the target layer
layer = model.layers[0]
current_w, current_bias = layer.get_weights()  # see the current weights
new_w = tf.constant([[1., 2., 3.],
                     [4., 5., 6.],
                     [7., 8., 9.]])
new_w = tf.reshape(new_w, current_w.shape)  # fix the shape to (3, 3, 1, 1)
new_bias = tf.constant([0.])
layer.set_weights([new_w, new_bias])
model.summary()
# let's see ..
tf.print(model.layers[0].get_weights())
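
Tying this back to the original question, here is a minimal TF2/Keras sketch (an illustration, not from the original answers) that loads all 8 fixed 3x3 filters from the question into a non-trainable Conv2D, followed by the same 2x2 max-pooling and ReLU as new_conv_layer. The filter list is rebuilt programmatically and the test input is made up:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# rebuild the question's 8 filters: -1 at the centre, +1 at each neighbour
offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
filters = np.zeros((8, 3, 3), dtype=np.float32)
for i, (dr, dc) in enumerate(offsets):
    filters[i, 1, 1] = -1.0
    filters[i, 1 + dr, 1 + dc] = 1.0

# conv2d kernels are (height, width, in_channels, out_channels) = (3, 3, 1, 8)
kernel = np.transpose(filters, (1, 2, 0))[:, :, np.newaxis, :]

model = models.Sequential([
    layers.Conv2D(8, (3, 3), padding='same', use_bias=False,
                  input_shape=(None, None, 1)),
    layers.MaxPooling2D((2, 2), padding='same'),
    layers.ReLU(),
])
model.layers[0].set_weights([kernel])
model.layers[0].trainable = False   # keep the fixed filters out of training

x = np.random.rand(1, 28, 28, 1).astype(np.float32)   # dummy grayscale image
print(model(x).shape)   # (1, 14, 14, 8)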
