
Average channels of a convolutional layer in Keras

I have built a ConvNet. The hidden layer before the final output produces a tensor of shape (None, 64, 32, 32). What I want is to take the element-wise average of those 64 channels. I have tried this:

import numpy as np
import keras
from keras.layers import Input, Convolution2D, Activation
from keras.models import Model

main_inputs = []
outputs = []

def convnet(channels, rows, columns):
    input = Input(shape=(channels, rows, columns))
    main_inputs.append(input)
    conv1 = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(input)
    activation1 = Activation('relu')(conv1)
    conv2 = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(activation1)
    activation2 = Activation('relu')(conv2)
    conv3 = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(activation2)
    activation3 = Activation('relu')(conv3)
    conv4 = Convolution2D(kernel_size=(3, 3), filters=channels, padding="same")(activation3)
    out = keras.layers.Average()(conv4)
    activation4 = Activation('linear')(out)
    outputs.append(activation4)
    print(np.shape(outputs))
    model = Model(inputs=main_inputs, outputs=outputs)

    return model

But I am getting an error:

ValueError: A merge layer should be called on a list of inputs

After that, instead of keras.layers.Average I tried the backend function from the documentation:

out=K.mean(conv4,axis=1)

But I am getting this error:

'Tensor' object has no attribute '_keras_history'

Any ideas?

Let's say conv4 is a tensor with shape (batch_size, nb_channels, 32, 32). You can average conv4 over the channel dimension as follows:

out = Lambda(lambda x: K.mean(x, axis=1))(conv4)

The resulting tensor out will have shape (batch_size, 32, 32). You need to wrap all backend operations within a Lambda layer, so that the resulting tensors are valid Keras tensors (and don't lack attributes such as _keras_history).
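Putting this together with the code from the question, a minimal sketch of the model (simplified to a single input and output, and not tested) could look like this:

from keras.layers import Input, Convolution2D, Activation, Lambda
from keras.models import Model
from keras import backend as K

def convnet(channels, rows, columns):
    # Channels-first input, e.g. (64, 32, 32) as in the question
    inp = Input(shape=(channels, rows, columns))
    x = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(inp)
    x = Activation('relu')(x)
    x = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(x)
    x = Activation('relu')(x)
    conv4 = Convolution2D(kernel_size=(3, 3), filters=channels, padding="same")(x)
    # Wrap the backend call in a Lambda layer so that 'out' is a valid Keras tensor
    out = Lambda(lambda t: K.mean(t, axis=1))(conv4)  # shape: (batch_size, rows, columns)
    return Model(inputs=inp, outputs=out)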

If you want the shape of out to be (batch_size, 1, 32, 32) instead, you can do:

out = Lambda(lambda x: K.mean(x, axis=1)[:, None, :, :])(conv4)

NOTE: Not tested.

Adding my few cents to rvinas' answer: K.mean has a parameter called keepdims which prevents the shape of the tensor from being reduced after the operation is applied.

keepdims: A boolean, whether to keep the dimensions or not. If keepdims is False, the rank of the tensor is reduced by 1. If keepdims is True, the reduced dimension is retained with length 1.

out = Lambda(lambda x: K.mean(x, axis=1, keepdims=True))(conv4)
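Note that keepdims is an argument of K.mean itself, so it goes inside the lambda, not on the Lambda layer. As an untested sanity check of the resulting shape, with a placeholder Input standing in for conv4:

from keras import backend as K
from keras.layers import Input, Lambda

# Stand-in for conv4: a channels-first feature map of shape (None, 64, 32, 32)
conv4 = Input(shape=(64, 32, 32))

# keepdims=True keeps a singleton channel axis instead of dropping it
out = Lambda(lambda x: K.mean(x, axis=1, keepdims=True))(conv4)
print(K.int_shape(out))  # expected: (None, 1, 32, 32)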
