
Why is my TensorFlow CNN's accuracy zero while the loss is not?

I am trying to build a twin CNN: my dataset has two inputs whose branches are finally merged together, with a single output neuron predicting IC50.

When I train it, I get 0 accuracy while the loss looks fine. Am I using the wrong loss function? It is currently mean_squared_error.

OS: Windows 10, TensorFlow version: 2.3.0

my code"

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Load the pre-encoded drug, cell-line, and IC50 arrays
encoded_drugs = np.load('encoded_drugs.npy')
encoded_cells = np.load('encoded_cells.npy')
encoded_ICs = np.load('encoded_ICs.npy')

# Split all three arrays with the same 80/20 partition
(encoded_drugs_train, encoded_drugs_test,
 encoded_cells_train, encoded_cells_test,
 encoded_ICs_train, encoded_ICs_test) = train_test_split(
    encoded_drugs, encoded_cells, encoded_ICs, test_size=0.2)

# Drug branch
input1 = keras.layers.Input(shape=(139, 32))
x1 = keras.layers.Flatten()(input1)
x2 = keras.layers.Dense(64, activation='relu')(x1)
x3 = keras.layers.Dense(64, activation='relu')(x2)

# Cell-line branch
input2 = keras.layers.Input(shape=(735, 2))
y1 = keras.layers.Flatten()(input2)
y2 = keras.layers.Dense(128, activation='relu')(y1)
y3 = keras.layers.Dense(64, activation='relu')(y2)

# Merge the two branches and predict a single IC50 value
merged = keras.layers.concatenate([x3, y3], axis=-1)
z = keras.layers.Dense(64, activation='relu')(merged)
out = keras.layers.Dense(1, activation='sigmoid')(z)

model = keras.models.Model(inputs=[input1, input2], outputs=out)
model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])

model.fit([encoded_drugs_train, encoded_cells_train], encoded_ICs_train,
          validation_split=0.2, epochs=2)

test_loss, test_accuracy = model.evaluate([encoded_drugs_test, encoded_cells_test],
                                          encoded_ICs_test)

print('Accuracy=', test_accuracy)

My output:

2020-02-18 11:06:00.759824: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  AVX AVX2
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2020-02-18 11:06:00.774869: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
Train on 75793 samples, validate on 18949 samples
Epoch 1/2
75793/75793 [==============================] - 17s 229us/sample - loss: 10.3671 - accuracy: 0.0000e+00 - val_loss: 10.4082 - val_accuracy: 0.0000e+00
Epoch 2/2
75793/75793 [==============================] - 11s 146us/sample - loss: 10.2673 - accuracy: 0.0000e+00 - val_loss: 10.3852 - val_accuracy: 0.0000e+00


3s 125us/sample - loss: 8.3239 - accuracy: 0.0000e+00
Accuracy= 0.0

You are trying to solve a regression problem (using the mean_squared_error loss) while using accuracy as a metric. In that case, accuracy is not a valid metric: it checks whether predictions are exactly equal to the targets, which essentially never happens for continuous outputs, so it stays at 0.

First of all, determine whether the problem you are trying to solve is a regression or a classification one.

If it is regression, use Dense(1, activation='linear') as the output layer and compile with model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['mse']).
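A minimal sketch of how the tail of the model above would change for regression (assuming IC50 is a continuous target; the two branches stay exactly as in the question):

# regression head: unbounded linear output instead of sigmoid
out = keras.layers.Dense(1, activation='linear')(z)
model = keras.models.Model(inputs=[input1, input2], outputs=out)
# mean squared error is both the loss and the metric here; accuracy is meaningless
model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['mse'])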

If it is binary classification, keep Dense(1, activation='sigmoid') as the output layer, but compile with model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy']).
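And the corresponding sketch for binary classification, which only applies if encoded_ICs is first converted into 0/1 labels (for example by thresholding):

# classification head: sigmoid output interpreted as a class probability
out = keras.layers.Dense(1, activation='sigmoid')(z)
model = keras.models.Model(inputs=[input1, input2], outputs=out)
# binary cross-entropy matches the sigmoid output, and accuracy is now a valid metric
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])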

Second, you need to train for more epochs (about 29 seconds of training is really not enough to give a good picture of your results).
