
Looks like keras Normalization layer doesn't denormalize properly

I want to use the Keras Normalization layer to "denormalize" my output. The docs say that passing invert=True does exactly that, but the layer doesn't behave as expected at all...

I tried to isolate the problem with a minimal example, showing that the layer doesn't compute the inverse of the normalization:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras import layers

norm = layers.Normalization()               # forward: (x - mean) / std
denorm = layers.Normalization(invert=True)  # should compute: x * std + mean
y = np.array([[10.0],
              [20.0],
              [30.0]])
norm.adapt(y)    # learn mean and variance from y
denorm.adapt(y)  # adapt the inverse layer on the same data

Here I checked the mean and variance, and they are the same for both layers, so all good so far.
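
For reference, the adapted statistics can be printed directly; the layer exposes them as mean and variance attributes after adapt (an assumption that holds for this Keras version):

print(norm.mean, norm.variance)      # mean 20.0, variance 200/3 ≈ 66.667
print(denorm.mean, denorm.variance)  # identical statistics for the inverse layer

Now checking the forward and inverse transforms: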

print(norm(20))   # expected: 0 (the input equals the mean)
print(denorm(0))  # expected: 20 (the mean back)

I get 0 and 163.29932 as output instead of 0 and 20... It looks like the denormalization adds the mean and then multiplies by the standard deviation, instead of multiplying by the standard deviation first and then adding the mean.
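
The numbers bear this out: the adapted mean is 20 and the variance is 200/3 (std ≈ 8.165), and applying the operations in the wrong order reproduces the observed output exactly:

mean, std = 20.0, np.sqrt(200.0 / 3.0)  # statistics adapted from y
print((0 + mean) * std)  # 163.29932 -- add mean, then multiply: the buggy order
print(0 * std + mean)    # 20.0      -- multiply by std, then add mean: the correct inverse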

The Keras version is probably relevant here:

print(keras.__version__)

Output: '2.10.0'

I looked into the source code and confirmed that it is indeed a bug: the invert path adds the mean first and then multiplies by the standard deviation.
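
Until a release with the fix lands, one workaround is to apply the inverse transform manually using the adapted statistics (a minimal sketch, assuming the mean and variance attributes shown above):

def denormalize(x, layer):
    # correct inverse: scale by the standard deviation first, then add the mean
    return x * tf.sqrt(layer.variance) + layer.mean

print(denormalize(0.0, denorm))  # tf.Tensor([20.], ...) -- the expected value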

I opened a pull request to fix this problem and it has been accepted, so it should be fixed in an upcoming release.
