
Keras BatchNormalization does not produce expected output

I am trying to recreate the BatchNormalization layer from Keras in NumPy (Python):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization

model = Sequential()
model.add(BatchNormalization(axis=1, center=False, scale=False))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])

scale = np.linspace(0, 100, 1000)
x_train = np.sin(scale) + 2.5
y_train = np.sin(scale)
print(x_train.shape)
print(y_train.shape)

model.fit(x_train, y_train, epochs=100, batch_size=100, shuffle=True, verbose=2)

x_test = np.array([1])

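# With center=False and scale=False, get_weights() returns
# [moving_mean, moving_variance]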
mean = model.layers[0].get_weights()[0]
var = model.layers[0].get_weights()[1]

print('mean', np.mean(x_train), 'mean_tf', mean)
print('var', np.var(x_train), 'var_tf', var)

print('result_tf', model.predict(x_test))
print('result_pred', (x_test - mean) / var)



Why am I not getting the same results?

The same happens when center and scale are True, but I wanted to keep the example simple. I have already managed to reproduce every other layer, such as Dense and LSTM.

Try print('result_pred', (x_test - mean) / np.sqrt(var)) instead; for further explanation, see my edit in this answer: stackoverflow.com/a/65744394/10733051
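For reference, here is a minimal NumPy sketch of the inference-time computation BatchNormalization performs when center=False and scale=False. The helper name batch_norm_inference is just for illustration; epsilon matches Keras' default of 1e-3:

import numpy as np

def batch_norm_inference(x, moving_mean, moving_var, epsilon=1e-3):
    # Normalize by the standard deviation (sqrt of the variance), not the
    # variance itself; epsilon guards against division by zero and matches
    # Keras' default setting.
    return (x - moving_mean) / np.sqrt(moving_var + epsilon)

print('result_pred', batch_norm_inference(x_test, mean, var))

Because the layer divides by the standard deviation plus an epsilon term, dividing by the raw variance will never match model.predict.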

