
Customized convolutional layer in TensorFlow

Let's assume I want to make the following layer in a neural network: instead of a square convolutional filter that moves over an image, I want the filter to have some other shape, say a rectangle, circle, or triangle (this is of course a silly example; the real case I have in mind is different). How would I implement such a layer in TensorFlow?

I found that one can define custom layers in Keras by extending tf.keras.layers.Layer, but the documentation is quite limited and has few examples. A Python implementation of a convolutional layer (for example by extending tf.keras.layers.Layer) would probably help as well, but it seems that the convolutional layers are implemented in C. Does this mean I have to implement my custom layer in C to get any reasonable speed, or would Python TensorFlow operations be enough?
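To make the subclassing route concrete, here is a minimal sketch of a custom layer built by extending tf.keras.layers.Layer. The layer name (MaskedDense) and the idea of multiplying the weights by a fixed binary mask are illustrative choices, not anything from the question; the point is only to show the build/call pattern, where the weights are created with add_weight so they participate in training:

```python
import tensorflow as tf

# Sketch of a custom layer: a dense layer whose weight matrix is
# elementwise-multiplied by a fixed binary mask, so arbitrary entries
# can be forced to zero (a non-square "filter shape" in spirit).
class MaskedDense(tf.keras.layers.Layer):
    def __init__(self, units, mask, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.mask = tf.constant(mask, dtype=tf.float32)  # shape (in_dim, units)

    def build(self, input_shape):
        # add_weight registers the tensor as trainable, so gradients
        # flow into it during fit()/GradientTape training.
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, inputs):
        # Masked entries contribute nothing and receive zero gradient.
        return tf.matmul(inputs, self.w * self.mask)
```

Since the masking is just an elementwise multiply inside call, this runs entirely through TensorFlow ops and does not require writing any C.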

Edit: Perhaps it is enough if I can define a tensor of weights in which some entries are constrained to be identically zero and some weights appear in multiple places in the tensor. Then I should be able to build a convolutional layer (and other layers) by hand. How would I do this, and how would I include these variables in training?

Edit 2: Let me add some more clarifications. Take the example of building a 5x5 convolutional layer with one output channel from scratch. If the input is 10x10 (plus padding, so the output is also 10x10), I would imagine doing this by creating a matrix of size 100x100 and filling in the 25 weights at the correct locations (so some entries are zero, and some entries are equal, i.e. each of the 25 weights shows up in many locations in this matrix). I would then multiply the input by this matrix to get the output. So my question is twofold: 1. How do I do this in TensorFlow? 2. Would this be very inefficient, and is some other approach recommended (assuming that I later want to customize what this filter looks like, so the standard conv2d is not good enough)?
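The construction described above (25 shared weights scattered into a 100x100 matrix, with zeros elsewhere) can be sketched with tf.scatter_nd, which is differentiable with respect to the scattered values, so gradients accumulate correctly into the tied weights. The image/kernel sizes come from the question; everything else is an illustrative assumption:

```python
import tensorflow as tf

H = W = 10   # 10x10 input (from the question)
K = 5        # 5x5 kernel
pad = K // 2  # "same" padding, so the output is also 10x10

kernel = tf.Variable(tf.random.normal([K, K]))  # the 25 free weights

def conv_as_matrix(kernel):
    # Scatter each kernel weight into every (output pixel, input pixel)
    # pair it connects. Unfilled entries of the (H*W, H*W) matrix stay
    # zero, and each of the 25 weights appears in many locations.
    idx, upd = [], []
    for oy in range(H):
        for ox in range(W):
            for ky in range(K):
                for kx in range(K):
                    iy, ix = oy + ky - pad, ox + kx - pad
                    if 0 <= iy < H and 0 <= ix < W:
                        idx.append([oy * W + ox, iy * W + ix])
                        upd.append(kernel[ky, kx])
    return tf.scatter_nd(idx, tf.stack(upd), [H * W, H * W])

x = tf.random.normal([1, H * W])                            # flattened input
y = tf.matmul(x, conv_as_matrix(kernel), transpose_b=True)  # 1 x 100 output
```

This is far less efficient than conv2d (a 100x100 dense matmul instead of a 5x5 sliding window, and the matrix is rebuilt on every call), but it makes the weight-tying explicit and lets you zero out or rearrange entries freely.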

Edit 3: It seems doable by using sparse tensors and assigning values from a previously defined tf.Variable. However, I don't know whether this approach will suffer from performance issues.
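A small sketch of this sparse-tensor idea, under illustrative assumptions (a tiny 3x3 matrix with three free parameters, one of them shared between two positions): the nonzero positions are fixed, and the values are gathered from a trainable tf.Variable, so gradients flow back through the gather and tied weights stay tied.

```python
import tensorflow as tf

weights = tf.Variable([0.5, -1.0, 2.0])     # 3 free parameters
indices = [[0, 0], [0, 2], [1, 1], [2, 0]]  # fixed nonzero positions
value_ids = [0, 1, 2, 0]                    # weight 0 is shared at two positions

def apply_sparse(x):
    # Rebuild the sparse matrix from the current weights; gradients
    # w.r.t. `weights` accumulate over all positions sharing an id.
    sp = tf.SparseTensor(
        indices, tf.gather(weights, value_ids), dense_shape=[3, 3])
    return tf.sparse.sparse_dense_matmul(sp, x)

x = tf.constant([[1.0], [1.0], [1.0]])
y = apply_sparse(x)
```

Note that sparse_dense_matmul is generally only worth it when the matrix is very sparse; for a 5x5 kernel on a 10x10 image (roughly 25% nonzeros in the 100x100 matrix), a dense matmul or the masking approach below may well be faster.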

Just use regular convolutional layers with square filters, and zero out some values after each weight update:

   # TF1-style: after each training step, re-apply the binary mask so
   # the "disallowed" filter entries are forced back to zero.
   g = tf.get_default_graph()
   sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
   conv1_filter = g.get_tensor_by_name('conv1:0')
   sess.run(tf.assign(conv1_filter, tf.multiply(conv1_filter, my_mask)))

where my_mask is a binary tensor (of the same shape and type as your filters) that matches the desired pattern.
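For what it's worth, the same masking idea can be expressed in TF2/Keras without a session, by passing a kernel_constraint that re-applies the mask after every optimizer update. This is a sketch, not the answer's original code; the mask shape (5, 5, 1, 1) matching Conv2D's kernel layout (height, width, in_channels, out_channels) and the random mask are illustrative assumptions:

```python
import tensorflow as tf

# A fixed binary mask with the same shape as the Conv2D kernel.
my_mask = tf.cast(tf.random.uniform([5, 5, 1, 1]) > 0.5, tf.float32)

# Keras applies kernel_constraint to the kernel after each weight
# update, so masked entries are zeroed again after every step.
conv = tf.keras.layers.Conv2D(
    filters=1, kernel_size=5, padding="same",
    kernel_constraint=lambda w: w * my_mask)
```

The constraint runs only after updates, so if you need the mask enforced from the very first forward pass as well, you can additionally multiply the kernel by the mask at initialization.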

EDIT: If you're not familiar with TensorFlow, the code above may be confusing. I recommend looking at this example, and specifically at the way the model is constructed (if you build it like this, you can access the first-layer filters as 'conv1/weights'). Also, I recommend switching to PyTorch :)

