
Denormalization of output from neural network

Good morning, I have used MinMax normalization to normalize my dataset, both the features and the label. My question is: is it correct to also normalize the label? If yes, how can I denormalize the output of the neural network (the predictions I make on the test set, which is normalized)?

Unfortunately I can't upload the dataset, but it consists of 18 features and 1 label. It is a regression task; the features and the label are physical quantities.

So the problem is that y_train_pred and y_test_pred are between 0 and 1. How can I predict the "real" values? If you find other mistakes, please tell me.

Thanks.

The code that I use is written below:

    import pandas as pd
    import tensorflow as tf
    from sklearn import preprocessing
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout

    dataset = pd.read_csv('DataSet.csv', decimal=',', delimiter=';')

    label = dataset.iloc[:, -1]
    features = dataset.drop(columns=['Label'])

    features = features[best_features]  # best_features: list of selected feature names, defined elsewhere

    X_train1, X_test1, y_train1, y_test1 = train_test_split(
        features, label, test_size=0.25, random_state=1, shuffle=True)

    y_test2 = y_test1.to_frame()
    y_train2 = y_train1.to_frame()

    # Normalize the features (a separate scaler is fitted on each split)
    scaler1 = preprocessing.MinMaxScaler()
    scaler2 = preprocessing.MinMaxScaler()
    X_train = scaler1.fit_transform(X_train1)
    X_test = scaler2.fit_transform(X_test1)

    # Normalize the labels
    scaler3 = preprocessing.MinMaxScaler()
    scaler4 = preprocessing.MinMaxScaler()
    y_train = scaler3.fit_transform(y_train2)
    y_test = scaler4.fit_transform(y_test2)

    optimizer = tf.keras.optimizers.Adamax(learning_rate=0.001)
    model = Sequential()

    model.add(Dense(80, input_shape=(X_train.shape[1],), activation='relu', kernel_initializer='random_normal'))
    model.add(Dropout(0.15))
    model.add(Dense(120, activation='relu', kernel_initializer='random_normal'))
    model.add(Dropout(0.15))
    model.add(Dense(80, activation='relu', kernel_initializer='random_normal'))

    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer=optimizer, metrics=['mse'])

    history = model.fit(X_train, y_train, epochs=300,
                        validation_split=0.1, shuffle=False, batch_size=120)
    history_dict = history.history

    loss_values = history_dict['loss']
    val_loss_values = history_dict['val_loss']

    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)

You should denormalize so that you get real-world predictions from your neural network, rather than a number between 0 and 1.

Min-max normalization is defined by:

z = (x - min)/(max - min)

with z being the normalized value, x being the label value, max being the maximum x value, and min being the minimum x value. So if we have z, min, and max, we can solve for x as follows:

x = z(max - min) + min
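
For example, if min = 10 and max = 50, then x = 30 normalizes to z = (30 - 10)/(50 - 10) = 0.5, and inverting gives x = 0.5(50 - 10) + 10 = 30, recovering the original value.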

Thus, before you normalize your data, store the max and min values of the label (assuming it is continuous). Then, after you get your predicted values, you can use the following function:

    y_max_pre_normalize = max(label)
    y_min_pre_normalize = min(label)

    def denormalize(y):
        # invert z = (x - min)/(max - min)  ->  x = z*(max - min) + min
        final_value = y * (y_max_pre_normalize - y_min_pre_normalize) + y_min_pre_normalize
        return final_value

Apply this function to your y_test_pred/y_train_pred to get the corresponding real-world values.
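
Alternatively, since you already scale the labels with a MinMaxScaler, the fitted scaler stores the min and max itself, and its inverse_transform method undoes the scaling directly. A minimal sketch using the variable names from your question (note it reuses scaler3, the scaler fitted on the training labels, for both splits, rather than fitting a separate scaler on the test set):

    # scaler3 was fitted on the training labels, so it maps the
    # network's 0-1 outputs back to the original physical units
    y_train_pred_real = scaler3.inverse_transform(y_train_pred)
    y_test_pred_real = scaler3.inverse_transform(y_test_pred)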

