How to interpret this CNN architecture
How does this CNN architecture work from the input layer to the first convolution layer? h x 98 are the input matrix dimensions; is n the number of channels or the number of inputs?
It doesn't seem like n is the number of channels, because 25 is the number of feature maps and their dimensions do not indicate that there are two channels.
However, if n is the number of inputs and the matrices are single-channel, I haven't found a single CNN architecture anywhere that takes multiple input matrices and convolves them together. Most examples convolve them separately and then concatenate.
In my example, n is 2: one matrix holds BER values and the other holds connection line-rate values.
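To make the two options concrete, here is a minimal NumPy sketch (all names and toy dimensions are my own illustration, not from the architecture in question) of what "convolving them together" would mean: the two matrices are stacked as channels of one input, and a standard convolution layer convolves each channel with its own kernel slice and sums the results into a single feature map.

```python
import numpy as np

# Hypothetical toy setup: two single-channel input matrices (standing in
# for my BER matrix and line-rate matrix) stacked into one 2-channel input.
h, w = 4, 6                       # toy spatial dimensions (stand-in for h x 98)
rng = np.random.default_rng(0)
ber = rng.random((h, w))          # channel 0: BER values
rate = rng.random((h, w))         # channel 1: line-rate values
x = np.stack([ber, rate])         # shape (2, h, w): n as number of channels

k = 3
kernel = rng.random((2, k, k))    # one kernel slice per input channel

# Valid cross-correlation: per-channel results are SUMMED, not concatenated,
# so the two input matrices collapse into ONE feature map.
out = np.zeros((h - k + 1, w - k + 1))
for c in range(x.shape[0]):       # loop over input channels
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] += np.sum(x[c, i:i + k, j:j + k] * kernel[c])

print(out.shape)                  # (2, 4): one feature map from two channels
```

The alternative I keep seeing in examples would instead run a separate convolution over each matrix and concatenate the resulting feature maps along the channel axis.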
What mistake am I making? How does this CNN work?