
Keras Multi-layer Neural Network Accuracy

I've built a simple multi-layer NN using Keras with precipitation data from Australia. The code takes 4 input columns: ['MinTemp', 'MaxTemp', 'Rainfall', 'WindGustSpeed'] and trains against the RainTomorrow output.

I've partitioned the data into training/test buckets and transformed all values into 0 <= n <= 1. When I try to run model.fit, my loss steadies at around -13.2, but my accuracy is always 0.0. An example of the logged fitting epochs is:

...
Epoch 37/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1274 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 38/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1457 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 39/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1315 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 40/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1797 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 41/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1844 - acc: 0.0000e+00 - val_loss: -16.1169 - val_acc: 0.0000e+00
Epoch 42/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.2205 - acc: 0.0000e+00 - val_loss: -16.1169 - val_acc: 0.0000e+00
Epoch 43/200
...

How can I amend the following script, so my accuracy grows, and my prediction output returns a value between 0 and 1 (0: no rain, 1: rain)?

import keras
import sklearn.model_selection
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler

labelencoder = LabelEncoder()

# read data, replace NaN with 0.0
csv_data = pd.read_csv('weatherAUS.csv', header=0)
csv_data = csv_data.replace(np.nan, 0.0, regex=True)

# Input/output columns scaled to 0<=n<=1
x = csv_data.loc[:, ['MinTemp', 'MaxTemp', 'Rainfall', 'WindGustSpeed']]
y = labelencoder.fit_transform(csv_data['RainTomorrow'])
scaler_x = MinMaxScaler(feature_range =(-1, 1))
x = scaler_x.fit_transform(x)
scaler_y = MinMaxScaler(feature_range =(-1, 1))
y = scaler_y.fit_transform([y])[0]

# Partitioned data for training/testing
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(x, y, test_size=0.2)

# model
model = keras.models.Sequential() 
model.add( keras.layers.normalization.BatchNormalization(input_shape=tuple([x_train.shape[1]])))
model.add(keras.layers.core.Dense(4, activation='relu'))
model.add(keras.layers.core.Dropout(rate=0.5))
model.add(keras.layers.normalization.BatchNormalization())
model.add(keras.layers.core.Dense(4, activation='relu'))
model.add(keras.layers.core.Dropout(rate=0.5))
model.add(keras.layers.normalization.BatchNormalization())
model.add(keras.layers.core.Dense(4, activation='relu'))
model.add(keras.layers.core.Dropout(rate=0.5))
model.add(keras.layers.core.Dense(1,   activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=["accuracy"])

callback_early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='auto')

model.fit(x_train, y_train, batch_size=1024, epochs=200, validation_data=(x_test, y_test), verbose=1, callbacks=[callback_early_stopping])

y_test = model.predict(x_test.values)


As you can see, the sigmoid activation function that you are using in your neural network output (the last layer) ranges from 0 to 1.

Note that your label (y) is rescaled to -1 to 1.

I suggest you change the y range to 0 to 1 and keep the sigmoid output.
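
A minimal sketch of that change applied to the script above, assuming the RainTomorrow column holds 'No'/'Yes' values so that LabelEncoder already produces 0/1 labels and the scaler_y step can simply be dropped:

# LabelEncoder already maps 'No'/'Yes' to 0 and 1, which matches the sigmoid output range,
# so no MinMaxScaler is applied to y
y = labelencoder.fit_transform(csv_data['RainTomorrow'])
# the sigmoid output layer and binary_crossentropy loss stay exactly as in the original script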

The sigmoid ranges from 0 to 1, but your MinMaxScaler scales the data from -1 to 1.

You can fix it by replacing 'sigmoid' in the output layer with 'tanh', as tanh has output ranging from -1 to 1.
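
A minimal sketch of only that swap, leaving the rest of the posted script (including the y scaling to -1..1) unchanged:

# final layer: tanh output covers -1 to 1, matching the MinMaxScaler(feature_range=(-1, 1)) used on y
model.add(keras.layers.core.Dense(1, activation='tanh'))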

Both the other answers can be used to address the fact that your network output is not in the same range as your y vector values. Either adjust your final layer to a tanh activation, or change the y-vector range to [0, 1].

However, your network's loss function and metric are defined for classification, whereas you are attempting regression (continuous values between [-1, 1]). The most common loss functions and metrics for that are the mean squared error and the mean absolute error. So I suggest you change the following:

model.compile(loss='mse', optimizer='rmsprop', metrics=['mse', 'mae'])
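
With that compile call the epoch logs will report mse/mae rather than accuracy. As a hedged follow-up sketch, assuming the tanh output layer and the y values scaled to [-1, 1] as above, the continuous predictions can be thresholded at 0 to recover a 0/1 rain flag:

y_pred = model.predict(x_test)        # continuous predictions, roughly in [-1, 1]
rain_flag = (y_pred > 0).astype(int)  # 0: no rain, 1: rain (assumes -1 encodes 'No' and 1 encodes 'Yes')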
