MatConvNet: why is the deep network's output matrix uniformly valued instead of varying?
I'm trying to obtain a density map from a network output of dimension 20x20x1x50, where 20x20 is the output map and 50 is the batch size.
The issue is that the output X takes the same value, 0.098, across every 20x20 output matrix. Instead of a Gaussian-shaped density map, I get a flat, near-constant 20x20x1x50 output, as shown in the attached figure. What am I missing here? The Euclidean loss used for backpropagation is:
case {'l2loss'}
    res = c - X;
    n   = numel(res);        % normalize by element count (was n = 1, inconsistent with the forward pass)
    if isempty(dzdy)         % forward pass: mean squared error
        Y = sum(res(:).^2) / n;
    else                     % backward pass: dL/dX = -2*(c - X)/n, scaled by incoming derivative
        Y_ = -(c - X);
        Y  = 2 * single(Y_ * (dzdy / n));
    end
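As a sanity check on the loss layer itself, the backward formula can be compared against a finite-difference approximation of the forward pass. This is a standalone sketch, not part of the original network: `X`, `c`, and `eps_` are made-up test values, and `n` is taken as `numel(res)` so the forward and backward passes use the same normalization.

```matlab
% Finite-difference check of the l2loss gradient (hypothetical standalone test).
X = rand(3, 3, 1, 2, 'single');
c = rand(3, 3, 1, 2, 'single');
n = numel(X);
dzdy = single(1);

analytic = 2 * (-(c - X)) * (dzdy / n);   % backward formula from the snippet

eps_ = 1e-3;
numeric = zeros(size(X), 'single');
for k = 1:numel(X)
    Xp = X; Xp(k) = Xp(k) + eps_;         % perturb one element up
    Xm = X; Xm(k) = Xm(k) - eps_;         % and down
    fp = sum((c(:) - Xp(:)).^2) / n;      % forward loss at X + eps
    fm = sum((c(:) - Xm(:)).^2) / n;      % forward loss at X - eps
    numeric(k) = (fp - fm) / (2 * eps_);  % central difference
end
fprintf('max abs diff: %g\n', max(abs(analytic(:) - numeric(:))));
```

If the two gradients agree to within roughly the finite-difference error, the problem is not in the loss layer, which points the debugging toward the network's initialization instead.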
Found the solution at https://github.com/vlfeat/matconvnet/issues/313 . Query conv.var(i).value to see at which layer the values collapse, and edit that layer in the conv net. In my case I had to change the biases of the conv layers:
net2.params(8).value = 0.01 * init_bias * ones(1, 128, 'single'); % 'biases'
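The inspection step above can be sketched as a loop over the network's variables. This is a minimal sketch, not the exact code from the issue: it assumes a DagNN object `net`, and `imBatch` / `'input'` are placeholder names for one batch of data and the network's input variable.

```matlab
% Run one batch forward so the intermediate variables are populated.
net.conserveMemory = false;     % keep intermediate values for inspection
net.eval({'input', imBatch});   % placeholder input name and batch

% Print the value range of each variable to find where activations collapse.
for i = 1:numel(net.vars)
    v = net.vars(i).value;
    fprintf('%-20s min=%.4g max=%.4g\n', ...
            net.vars(i).name, min(v(:)), max(v(:)));
end
```

A layer whose output range collapses to a single constant is the place to intervene; scaling its biases (or weights) down at initialization, as in the line above, can stop the activations from saturating.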