
Keras deep learning model always gives the same acc in training

I want to make a prediction with Keras, but it always gives the same acc value during training, even though the loss is decreasing.

I'm trying to predict production parameters. Some example rows are given below:

Data

Basically, I want to predict the fill_press parameter from the others. My code is here:

import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

# Features: volume, injector, filling_time, machine; target: fill_press
x = pd.concat([volume, injector, filling_time, machine], axis=1)
y = fill_press

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1)

predicter = Sequential()
predicter.add(Dense(units=9, use_bias=True, kernel_initializer='RandomUniform', activation='linear', input_dim=9))  # Input layer
predicter.add(Dense(units=7, use_bias=True, kernel_initializer='RandomUniform', activation='linear'))
predicter.add(Dense(units=4, use_bias=True, kernel_initializer='RandomUniform', activation='linear'))
predicter.add(Dense(units=1, kernel_initializer='RandomUniform', activation='linear'))  # single linear output
predicter.compile(optimizer='sgd', loss='mean_absolute_error', metrics=['accuracy'])
predicter.fit(x_train, y_train, batch_size=10, epochs=1000)

y_pred = predicter.predict(x_test)

What should I change? Also, I'm not sure my model is correct. Do you have any recommendations?

As you can see, acc stays the same (0.1333) from start to end.

I should also highlight that I have quite a small amount of data.

Training output:

Epoch 985/1000
45/45 [==============================] - 0s 337us/step - loss: 0.0990 - acc: 0.1333
Epoch 986/1000
45/45 [==============================] - 0s 289us/step - loss: 0.1006 - acc: 0.1333
Epoch 987/1000
45/45 [==============================] - 0s 266us/step - loss: 0.1003 - acc: 0.1333
Epoch 988/1000
45/45 [==============================] - 0s 355us/step - loss: 0.0997 - acc: 0.1333
Epoch 989/1000
45/45 [==============================] - 0s 199us/step - loss: 0.1003 - acc: 0.1333
Epoch 990/1000
45/45 [==============================] - 0s 167us/step - loss: 0.1001 - acc: 0.1333
Epoch 991/1000
45/45 [==============================] - 0s 200us/step - loss: 0.0997 - acc: 0.1333
Epoch 992/1000
45/45 [==============================] - 0s 222us/step - loss: 0.0987 - acc: 0.1333
Epoch 993/1000
45/45 [==============================] - 0s 304us/step - loss: 0.0984 - acc: 0.1333
Epoch 994/1000
45/45 [==============================] - 0s 244us/step - loss: 0.1001 - acc: 0.1333
Epoch 995/1000
45/45 [==============================] - 0s 332us/step - loss: 0.1006 - acc: 0.1333
Epoch 996/1000
45/45 [==============================] - 0s 356us/step - loss: 0.0999 - acc: 0.1333
Epoch 997/1000
45/45 [==============================] - 0s 332us/step - loss: 0.1014 - acc: 0.1333
Epoch 998/1000
45/45 [==============================] - 0s 394us/step - loss: 0.0988 - acc: 0.1333
Epoch 999/1000
45/45 [==============================] - 0s 269us/step - loss: 0.1013 - acc: 0.1333
Epoch 1000/1000
45/45 [==============================] - 0s 242us/step - loss: 0.0992 - acc: 0.1333

I guess, since you have one output unit and a linear activation function in your last dense layer, that you are performing regression.

However, the accuracy metric in TensorFlow is meant for classification tasks, so it is not meaningful here. See the documentation: https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Accuracy .
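
For a regression setup like this you would drop accuracy and track regression metrics such as MAE or MSE instead. A minimal sketch of what the compile/evaluate step could look like, reusing the variable names from your code:

# Track regression metrics instead of classification accuracy
predicter.compile(optimizer='sgd',
                  loss='mean_absolute_error',
                  metrics=['mae', 'mse'])

predicter.fit(x_train, y_train, batch_size=10, epochs=1000)

# evaluate() returns the loss followed by the metrics listed above
loss, mae, mse = predicter.evaluate(x_test, y_test)

That way the number reported each epoch actually measures how far the predictions are from the targets, and it will move together with the loss instead of staying constant.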
