Wrap Python callable in Keras layer
In keras / tensorflow it is often quite simple to describe layers directly as functions that map their input to an output, like so:
def resnet_block(x, kernel_size):
    ch = x.shape[-1]
    out = Conv2D(ch, kernel_size, strides=(1,1), padding='same', activation='relu')(x)
    out = Conv2D(ch, kernel_size, strides=(1,1), padding='same', activation='relu')(out)
    out = Add()([x, out])
    return out
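For illustration, a function like this plugs straight into the functional API. A minimal, self-contained sketch (the 8×8×3 input size is just an assumption for the example; any H×W×C works, since the block preserves shape):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Add
from tensorflow.keras.models import Model

def resnet_block(x, kernel_size):
    # Two same-padded convolutions plus a skip connection, as above
    ch = x.shape[-1]
    out = Conv2D(ch, kernel_size, strides=(1, 1), padding='same', activation='relu')(x)
    out = Conv2D(ch, kernel_size, strides=(1, 1), padding='same', activation='relu')(out)
    return Add()([x, out])

# Assumed example input size
i = Input((8, 8, 3))
model = Model(inputs=i, outputs=resnet_block(i, (3, 3)))
```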
whereas subclassing Layer to get something like

r = ResNetBlock(kernel_size=(3,3))
y = r(x)

is a little more cumbersome (or even a lot more cumbersome for more complex examples).
Since keras seems perfectly happy to construct the underlying weights of its layers when they're being called for the first time, I was wondering if it was possible to just wrap functions such as the one above and let keras figure things out once there are inputs, i.e. I would like it to look like this:
r = FunctionWrapperLayer(lambda x: resnet_block(x, kernel_size=(3,3)))
y = r(x)
I've made an attempt at implementing FunctionWrapperLayer, which looks as follows:
class FunctionWrapperLayer(Layer):
    def __init__(self, fn):
        super(FunctionWrapperLayer, self).__init__()
        self.fn = fn

    def build(self, input_shape):
        shape = input_shape[1:]
        inputs = Input(shape)
        outputs = self.fn(inputs)
        self.model = Model(inputs=inputs, outputs=outputs)
        self.model.compile()

    def call(self, x):
        return self.model(x)
This looks like it might work; however, I've run into some bizarre issues whenever I use activations, e.g. with
def bad(x):
    out = tf.keras.activations.sigmoid(x)
    out = Conv2D(1, (1,1), strides=(1,1), padding='same')(out)
    return out

x = tf.constant(tf.reshape(tf.range(48, dtype=tf.float32), [1, 4, -1, 1]))
w = FunctionWrapperLayer(bad)
w(x)
I get the following error:
FailedPreconditionError: Error while reading resource variable _AnonymousVar34 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar34/class tensorflow::Var does not exist.
[[node conv2d_6/BiasAdd/ReadVariableOp (defined at <ipython-input-33-fc380d9255c5>:12) ]] [Op:__inference_keras_scratch_graph_353]
What this suggests to me is that there is something inherently wrong with initializing models like that in the build method. Maybe someone has a better idea as to what might be going on there or how else to get the functionality I would like.
Update: As mentioned by jr15, the above does work when the function involved only uses keras layers. However, the following ALSO works, which has me a little puzzled:
i = Input(x.shape[1:])
o = bad(i)
model = Model(inputs=i, outputs=o)
model(x)
Incidentally, model.submodules yields
(<tensorflow.python.keras.engine.input_layer.InputLayer at 0x219d80c77c0>,
<tensorflow.python.keras.engine.base_layer.TensorFlowOpLayer at 0x219d7afc820>,
<tensorflow.python.keras.layers.convolutional.Conv2D at 0x219d7deafa0>)
meaning the activation is automatically turned into a "TensorFlowOpLayer" when doing it like that.
Another update: Looking at the original error message, it seems like the activation isn't the only culprit. If I remove the convolution and use the wrapper, everything works as well, and again I find a "TensorFlowOpLayer" when inspecting the submodules.
Your solution actually works! The trouble you're running into is that tf.keras.activations.sigmoid is not a Layer, but a plain Tensorflow function. To make it work, use keras.layers.Activation("sigmoid")(x) instead. For the more general case, where you want to use some Tensorflow function as a layer, you can wrap it in a Lambda layer like so:
out = keras.layers.Lambda(lambda x: tf.some_function(x))(out)
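Putting both fixes together, here is a hedged sketch of the question's bad function rewritten with layer objects only (the tf.clip_by_value call is just an illustrative stand-in for "some Tensorflow function"; the 4×12×1 input matches the question's reshaped range of 48 values):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Activation, Lambda
from tensorflow.keras.models import Model

def fixed(x):
    # Layer object instead of the plain function tf.keras.activations.sigmoid
    out = Activation("sigmoid")(x)
    out = Conv2D(1, (1, 1), strides=(1, 1), padding='same')(out)
    # An arbitrary TF function wrapped in a Lambda layer (clip chosen only as an example)
    out = Lambda(lambda t: tf.clip_by_value(t, 0.0, 1.0))(out)
    return out

i = Input((4, 12, 1))
model = Model(inputs=i, outputs=fixed(i))
```

With every op expressed as a Layer, the same graph also works inside the FunctionWrapperLayer from the question.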
See the docs for more info: https://keras.io/api/layers/core_layers/lambda/
With Tensorflow 2.4 it apparently just works now. The submodules now show a "TFOpLambda" layer.
To anybody interested, here is some slightly improved wrapper code that also accommodates multi-input models:
class FunctionWrapperLayer(Layer):
    def __init__(self, fn):
        super(FunctionWrapperLayer, self).__init__()
        self.fn = fn

    def build(self, input_shapes):
        super(FunctionWrapperLayer, self).build(input_shapes)
        if type(input_shapes) is list:
            inputs = [Input(shape[1:]) for shape in input_shapes]
        else:
            inputs = Input(input_shapes[1:])
        outputs = self.fn(inputs)
        self.fn_model = Model(inputs=inputs, outputs=outputs)
        self.fn_model.compile()

    def call(self, x):
        return self.fn_model(x)
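A quick usage sketch for the multi-input path. The wrapper class is repeated so the snippet is self-contained; Add is just an example of a function taking a list of inputs, and compile() is left out here since it isn't needed for a plain forward pass:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer, Input, Add
from tensorflow.keras.models import Model

class FunctionWrapperLayer(Layer):
    # Same wrapper as above, repeated so this snippet runs standalone
    def __init__(self, fn):
        super(FunctionWrapperLayer, self).__init__()
        self.fn = fn

    def build(self, input_shapes):
        super(FunctionWrapperLayer, self).build(input_shapes)
        if type(input_shapes) is list:
            # Multi-input case: one Input per incoming tensor, batch dim stripped
            inputs = [Input(shape[1:]) for shape in input_shapes]
        else:
            inputs = Input(input_shapes[1:])
        outputs = self.fn(inputs)
        self.fn_model = Model(inputs=inputs, outputs=outputs)

    def call(self, x):
        return self.fn_model(x)

# The wrapped function receives a list of tensors when called with a list
w = FunctionWrapperLayer(lambda pair: Add()(pair))
a = tf.ones([1, 4, 4, 1])
b = tf.ones([1, 4, 4, 1])
y = w([a, b])  # elementwise sum of the two inputs
```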