
Dimensional Difference between Running mean and Sample mean in Batch normalization
I have recently been working through cs231n on my own, and in the batch normalization assignment, specifically the running-mean computation:
running_mean = momentum * running_mean + (1 - momentum) * sample_mean
running_mean is initialized by
running_mean = bn_param.get("running_mean", np.zeros(D, dtype=x.dtype))
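For context, here is a minimal sketch of the train-mode bookkeeping around those two lines, written the way the assignment's convention suggests: the statistics are read from bn_param at the top and written back into the same dict at the bottom. The function name and defaults below are my own, not the assignment's exact code.

import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, bn_param):
    # Train-mode bookkeeping only; test mode and the backward pass are omitted.
    momentum = bn_param.get("momentum", 0.9)
    eps = bn_param.get("eps", 1e-5)
    D = x.shape[1]
    # Whatever was last written into bn_param is what comes back out here,
    # regardless of the current layer's width D.
    running_mean = bn_param.get("running_mean", np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get("running_var", np.zeros(D, dtype=x.dtype))
    sample_mean = x.mean(axis=0)          # shape (D,) for the current input
    sample_var = x.var(axis=0)
    out = gamma * (x - sample_mean) / np.sqrt(sample_var + eps) + beta
    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_var + (1 - momentum) * sample_var
    # The updated statistics are stored back into the same dict for next time.
    bn_param["running_mean"] = running_mean
    bn_param["running_var"] = running_var
    return out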
So when there are multiple batchnorm layers, the running_mean value is inherited from the last batchnorm layer, while sample_mean is computed from the current layer's input, which leads to:
File ~/assignment/assignment2/cs231n/layers.py:217, in batchnorm_forward(x, gamma, beta, bn_param)
213 out = x_hat * gamma + beta
215 print(running_mean.shape, miu.shape)
--> 217 running_mean = momentum * running_mean + (1 - momentum) * miu
218 running_var = momentum * running_var + (1 - momentum) * sigma_squared
220 cache = miu, sigma_squared, eps, N, x_hat, x, gamma
ValueError: operands could not be broadcast together with shapes (1,20) (1,30)
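That error is exactly what happens when a single bn_param dict is shared by every batchnorm layer: the dict still holds the (20,)-wide statistics written by a previous layer while the current layer's input is 30 features wide. Below is a minimal, self-contained reproduction with hypothetical layer widths, plus the usual fix of keeping one bn_param dict per layer (which, as far as I can tell, is how the assignment scaffold sets things up):

import numpy as np

D1, D2 = 20, 30                       # hypothetical widths of two BN layers
momentum = 0.9
shared = {}                           # one dict reused by both layers (the bug)

# layer 1 runs first and writes its (20,) statistics into the shared dict
shared["running_mean"] = np.zeros(D1)

# layer 2 then reads a (20,) running_mean but computes a (30,) sample mean
x2 = np.random.randn(8, D2)
sample_mean = x2.mean(axis=0)
running_mean = shared.get("running_mean", np.zeros(D2))
print(running_mean.shape, sample_mean.shape)   # (20,) (30,)
# momentum * running_mean + (1 - momentum) * sample_mean   # -> ValueError, as above

# the usual fix: give every batchnorm layer its own bn_param dict
bn_params = [{"mode": "train"} for _ in (D1, D2)]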
What am I missing here? The derivation seems correct.
I tried to implement the batchnorm layer, but running_mean and sample_mean end up with different dimensions.
This is what I have:
miu = np.mean(x, axis=0)                 # per-feature sample mean, shape (D,)
var = np.var(x, axis=0)                  # per-feature sample variance, shape (D,)
x_hat = (x - miu) / np.sqrt(var + eps)   # normalize the batch
out = x_hat * gamma + beta               # scale and shift
print(running_mean.shape, miu.shape)
running_mean = momentum * running_mean + (1 - momentum) * miu
running_var = momentum * running_var + (1 - momentum) * var
cache = miu, var, eps, N, x_hat, x, gamma
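As a sanity check, the snippet above is fine in isolation: with a fresh bn_param for the layer, running_mean and miu both come out with shape (D,), so the mismatch has to come from how bn_param is shared between layers rather than from the math. A minimal self-contained check with hypothetical sizes:

import numpy as np

N, D = 8, 30
eps, momentum = 1e-5, 0.9
x = np.random.randn(N, D)
gamma, beta = np.ones(D), np.zeros(D)

bn_param = {}                                             # fresh dict for this layer
running_mean = bn_param.get("running_mean", np.zeros(D, dtype=x.dtype))

miu = np.mean(x, axis=0)
var = np.var(x, axis=0)
x_hat = (x - miu) / np.sqrt(var + eps)
out = x_hat * gamma + beta

print(running_mean.shape, miu.shape)                      # (30,) (30,) -- shapes match
running_mean = momentum * running_mean + (1 - momentum) * miu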