Keras custom layer with no different output_shape
I am trying to implement a layer in Keras that adds a trainable weight element-wise to each input, so the input, the weights, and the output all have exactly the same shape. Despite this being simple to state, I am struggling to implement it, and I have not found any examples of a custom layer that does not change the input shape.
from keras.engine.topology import Layer
import keras.backend as K

class SumationLayer(Layer):
    def __init__(self, **kwargs):
        self.output_dim = K.placeholder(None)
        super(SumationLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(SumationLayer, self).build(input_shape)  # Be sure to call this somewhere!
        self.output_dim = (input_shape[0], self.output_dim)

    def call(self, x):
        return x + self.kernel

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
This produces the following error:
TypeError: Value passed to parameter 'shape' has DataType float32 not in list of allowed values: int32, int64
If I implement the layer the way the Keras examples do, I have to pass an output shape at initialization, which produces undesired behavior (the output gets flattened by fully connecting the inputs).
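The shape difference between the two approaches can be illustrated with a small NumPy sketch (the array sizes here are illustrative, not taken from the original post): a Dense-style kernel contracts the last input axis down to output_dim, while an element-wise kernel with the input's per-sample shape leaves the shape untouched.

```python
import numpy as np

x = np.random.rand(4, 10, 10)          # batch of 4 samples, each 10x10

# Dense-style kernel, as in the Keras docs example: a dot product maps
# the last axis to output_dim, so the output shape is changed.
output_dim = 5
dense_kernel = np.random.rand(10, output_dim)
print(np.dot(x, dense_kernel).shape)   # (4, 10, 5)  -- shape changed

# Element-wise kernel with the input's per-sample shape: broadcasting
# over the batch axis keeps the exact input shape, which is the goal here.
ew_kernel = np.random.rand(10, 10)
print((x + ew_kernel).shape)           # (4, 10, 10) -- shape preserved
```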
With the code below I get the effect I wanted. The original version failed because self.output_dim = K.placeholder(None) creates a float32 tensor, and the shape argument of add_weight only accepts integer dimensions, hence the TypeError. Note, however, that this version only works for 2-dimensional inputs; if you need a 3-dimensional input tensor, you also need to include input_shape[3].
from keras.layers import Layer, Input
from keras import backend as K
from keras import Model
import tensorflow as tf

class SumationLayer(Layer):
    def __init__(self, **kwargs):
        super(SumationLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], input_shape[2]),
                                      initializer='uniform',
                                      trainable=True)
        super(SumationLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        return x + self.kernel

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1], input_shape[2])

input = Input(shape=(10, 10))
output = SumationLayer()(input)
model = Model(inputs=[input], outputs=[output])
model.summary()
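The same idea can be made rank-agnostic by giving the kernel the input's full per-sample shape (everything except the batch axis), so no axis has to be listed explicitly. Here is a minimal sketch, assuming the tf.keras API; the layer name ElementwiseBiasLayer is mine, not from the original post:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer, Input
from tensorflow.keras import Model

class ElementwiseBiasLayer(Layer):
    """Adds a trainable weight element-wise; output shape equals input shape."""

    def build(self, input_shape):
        # Skip the batch axis and use every remaining axis, so the same
        # layer works for 2-D, 3-D, or higher-rank inputs unchanged.
        self.kernel = self.add_weight(name='kernel',
                                      shape=input_shape[1:],
                                      initializer='random_uniform',
                                      trainable=True)
        super().build(input_shape)

    def call(self, x):
        return x + self.kernel

    def compute_output_shape(self, input_shape):
        return input_shape  # identical to the input shape by construction

inp = Input(shape=(10, 10))
model = Model(inputs=inp, outputs=ElementwiseBiasLayer()(inp))
print(model.output_shape)  # (None, 10, 10)
```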