
Multi-dimension input to a neural network

I have a neural network with many layers. The input to the network has dimension [batch_size, 7, 4] . When this input is passed through the network, I observed that only the third dimension keeps changing: if my first layer has 20 outputs, then the output of that layer is [batch_size, 7, 20] . I need the end result after many layers to be of the shape [batch_size, 16] .

I have the following questions:

  • Are the other two dimensions being used at all?
  • If not, how can I modify my network so that all three dimensions are used?
  • How do I drop one dimension meaningfully to get the 2-d output that I desire?

Following is my current implementation in TensorFlow v1.14 and Python 3:

out1 = tf.layers.dense(inputs=noisy_data, units=150, activation=tf.nn.tanh)  # Outputs [batch, 7, 150]
out2 = tf.layers.dense(inputs=out1, units=75, activation=tf.nn.tanh)  # Outputs [batch, 7, 75] 
out3 = tf.layers.dense(inputs=out2, units=32, activation=tf.nn.tanh)  # Outputs [batch, 7, 32]
out4 = tf.layers.dense(inputs=out3, units=16, activation=tf.nn.tanh)  # Outputs [batch, 7, 16]

Any help is appreciated. Thanks.

Answer to Question 1 : The values along the 2nd dimension ( axis=1 ) are not mixed by the dense layer. You can see this from the variables created by the code snippet below (assuming batch_size=2 ):

>>> input1 = tf.placeholder(tf.float32, shape=[2, 7, 4])
>>> tf.layers.dense(inputs=input1, units=150, activation=tf.nn.tanh)
>>> graph = tf.get_default_graph()
>>> graph.get_collection('variables')
[<tf.Variable 'dense/kernel:0' shape=(4, 150) dtype=float32_ref>, <tf.Variable 'dense/bias:0' shape=(150,) dtype=float32_ref>]

The kernel has shape (4, 150) , so the dense layer acts only on the last dimension of the input. The 2nd dimension (the 7) is carried through unchanged, treated like an extra batch dimension: each of the 7 rows is transformed independently with the same shared weights. The 1st dimension is the batch itself. The official TensorFlow docs don't say much about the required input rank for tf.layers.dense .
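To make this concrete, a dense layer applied to a 3-D input is equivalent to applying the same (in, out) kernel to every row along axis=1 separately. A minimal NumPy sketch (random values purely for illustration; NumPy stands in for the TF op since only the contraction pattern matters):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 7, 4))       # [batch, 7, 4] input
kernel = rng.standard_normal((4, 150))   # same shape as dense/kernel:0
bias = np.zeros(150)

# What the dense layer computes on a 3-D input: the kernel contracts
# only the last axis; axis=1 is carried along untouched.
out = np.tanh(x @ kernel + bias)         # shape (2, 7, 150)

# Equivalent: apply the same kernel to each of the 7 rows independently.
rows = np.stack(
    [np.tanh(x[:, i, :] @ kernel + bias) for i in range(7)],
    axis=1,
)

print(out.shape)            # (2, 7, 150)
print(np.allclose(out, rows))  # True: the 7 rows never interact
```

Because the rows never interact, any information encoded across axis=1 is ignored by a stack of dense layers alone.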

Answer to Question 2 : Reshape the input from [batch_size, 7, 4] to [batch_size, 28] before passing it to the first dense layer, so that all 28 values per sample are mixed by the layer's weights:

input1 = tf.reshape(input1, [-1, 7*4])

Answer to Question 3 : If you reshape the inputs as above, there is no need to drop a dimension.
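Putting it together, here is a sketch of the shape bookkeeping for the rewritten pipeline, using plain NumPy with zero-valued kernels so the dimensions are easy to verify (the real model would keep the tf.layers.dense calls from the question, just applied after the reshape):

```python
import numpy as np

batch_size = 2
x = np.zeros((batch_size, 7, 4))

# Flatten the last two axes so every layer sees all 28 values per sample.
x = x.reshape(-1, 7 * 4)            # (batch_size, 28)

# Simulate the four dense layers; only the shapes matter here.
for units in (150, 75, 32, 16):
    kernel = np.zeros((x.shape[1], units))
    x = np.tanh(x @ kernel)

print(x.shape)                      # (2, 16)
```

The final output is already 2-D, so nothing needs to be dropped or squeezed afterwards.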

