
How to achieve elementwise convolution of two tensors using TensorFlow?

In my problem, I want to convolve two tensors in my neural network model.

The shapes of the two tensors are [None, 2, 1] and [None, 3, 1], respectively. The axis with dimension None is the batch size of the input tensor. For each sample in the batch, I want to convolve the two tensors of shape [2, 1] and [3, 1].

However, tf.nn.conv1d in TensorFlow can only convolve the input with a fixed kernel. Is there any function that can convolve two tensors along the batch axis, similar to the way tf.multiply performs elementwise multiplication sample by sample?
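For reference, tf.multiply already gives me this per-sample, elementwise behaviour when the two tensors have the same shape (the shapes below are chosen only for illustration):

import tensorflow as tf

a = tf.random.normal([4, 2, 1])   # a batch of 4 samples, each of shape [2, 1]
b = tf.random.normal([4, 2, 1])
c = tf.multiply(a, b)             # elementwise product, sample by sample
print(c.shape)                    # (4, 2, 1)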

The code I tried can be simplified as follows:

input_signal = Input(shape=(L, M), name='input_signal')
input_h = Input(shape=(N, 1), name='input_h')
# intended: convolve each sample of input_signal with the matching sample of input_h,
# but tf.nn.conv1d expects a fixed (non-batched) filter, so this does not run as-is
faded = Lambda(lambda x: tf.nn.conv1d(x[0], x[1], stride=1, padding='SAME'))([input_signal, input_h])

What I want is for each sample of input_signal to be convolved with the sample of input_h that has the same batch index. The snippet above only sketches the idea and does not actually run. My question is how to modify the code so that one input tensor can be convolved with another input tensor for every sample in the batch.
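To make the intended behaviour concrete, here is a rough sketch of the per-sample convolution I am after, written as an explicit loop over the batch with tf.map_fn. This is only an illustration of the idea, not code from my model; it assumes both tensors have a single channel and uses 'SAME' padding so the longer kernel does not consume the short signal entirely:

import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda

L, M, N = 2, 1, 3   # shapes from above: signal [2, 1], kernel [3, 1]

input_signal = Input(shape=(L, M), name='input_signal')
input_h = Input(shape=(N, 1), name='input_h')

def conv_single_sample(args):
    signal, kernel = args                      # [L, 1] and [N, 1] for one sample
    signal = tf.reshape(signal, [1, -1, 1])    # [1, L, in_channels=1]
    kernel = tf.reshape(kernel, [-1, 1, 1])    # [N, in_channels=1, out_channels=1]
    out = tf.nn.conv1d(signal, kernel, stride=1, padding='SAME')
    return tf.reshape(out, [-1, 1])            # back to [L, 1]

faded = Lambda(
    lambda x: tf.map_fn(conv_single_sample, (x[0], x[1]),
                        fn_output_signature=tf.float32)
)([input_signal, input_h])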

According to the description of the kernel_size argument of the Conv1D layer (or any other convolution layer) in the documentation, a single layer cannot hold multiple filters with different kernel sizes or strides.

Also, convolutions with kernels of different sizes produce outputs of different height and width. The general formula for the output size, assuming a symmetric kernel, is

(X−K+2P)/S+1

  • where X is the input height/width
  • K is the kernel size
  • P is the zero padding
  • S is the stride length

So, assuming you keep the zero padding and stride the same, you cannot have multiple kernels with different sizes in one Conv1D layer.
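As a quick sanity check of the formula (with X = 10, P = 0, S = 1 chosen only for illustration), kernels of size 2 and 3 already give outputs of different lengths: (10 - 2)/1 + 1 = 9 versus (10 - 3)/1 + 1 = 8.

import tensorflow as tf

x = tf.random.normal([1, 10, 1])   # X = 10 timesteps, 1 feature
print(tf.keras.layers.Conv1D(filters=1, kernel_size=2)(x).shape)   # (1, 9, 1)
print(tf.keras.layers.Conv1D(filters=1, kernel_size=3)(x).shape)   # (1, 8, 1)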

You can, however, use the tf.keras functional API to apply Conv1D multiple times to the same input (or, in your case, apply a separate Conv1D layer with its own kernel size to each input) and then max-pool, crop, or zero-pad the different outputs so their dimensions match before stacking them.

Example:

import tensorflow as tf

inputs = tf.keras.Input(shape=(n_timesteps, n_features))
x1 = tf.keras.layers.Conv1D(filters=32, kernel_size=2)(inputs)   # output length: n_timesteps - 1
x2 = tf.keras.layers.Conv1D(filters=16, kernel_size=3)(inputs)   # output length: n_timesteps - 2
# match the dimensions (height and width) of x1 and x2 here
x3 = tf.keras.layers.Concatenate(axis=-1)([x1, x2])

You can use ZeroPadding1D, Cropping1D, or MaxPooling1D to match the dimensions.
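For example, a minimal sketch (with n_timesteps and n_features picked only for illustration) that zero-pads the shorter branch with ZeroPadding1D before concatenating:

import tensorflow as tf

n_timesteps, n_features = 10, 1   # example values

inputs = tf.keras.Input(shape=(n_timesteps, n_features))
x1 = tf.keras.layers.Conv1D(filters=32, kernel_size=2)(inputs)   # (None, 9, 32)
x2 = tf.keras.layers.Conv1D(filters=16, kernel_size=3)(inputs)   # (None, 8, 16)
x2 = tf.keras.layers.ZeroPadding1D(padding=(0, 1))(x2)           # (None, 9, 16)
x3 = tf.keras.layers.Concatenate(axis=-1)([x1, x2])              # (None, 9, 48)
model = tf.keras.Model(inputs=inputs, outputs=x3)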
