
TensorFlow training accuracy much higher than test accuracy

I am consistently getting around 60% accuracy on the training data, but when I try to predict the outcome with my test data the model is only about 50% accurate (about what random guessing would give).

    from tensorflow import keras

    def train_model(training_data, training_differential, test_data,
                    test_differential):
        num_nodes = 180

        model = keras.Sequential([
            keras.layers.Flatten(input_shape=(12, 22)),
            keras.layers.Dense(num_nodes, activation="relu"),
            keras.layers.Dense(2, activation="softmax")
        ])

        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        model.fit(training_data, training_differential, epochs=20)

        predictions = model.predict(test_data)

        return predictions


Not sure if I'm overfitting the data or if I'm using an incorrect model. Any help would be much appreciated!

This could be a lot of things. What kind of data is going in, and what are your target types? Are they one-hot vectors like [0, 1] or plain integers like 6 or 2? If they are one-hot encoded, you don't need sparse categorical crossentropy, just categorical crossentropy (the sparse variant is for integer labels). If this is a binary classification problem, e.g. cat or dog but never both, then your final layer should be of size 1 with a sigmoid activation, and the target a single vector where each value is either 1 or 0.
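As a concrete illustration, here is a minimal sketch of that binary-classification head, assuming integer 0/1 labels and the same 12x22 input shape as the question (the random data is only a stand-in):

```python
import numpy as np
from tensorflow import keras

# Binary head: a single sigmoid unit outputs P(class == 1),
# paired with binary_crossentropy instead of the softmax/size-2 setup.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(12, 22)),
    keras.layers.Dense(180, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Smoke test on random data shaped like the question's inputs.
x = np.random.rand(8, 12, 22).astype("float32")
y = np.random.randint(0, 2, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
probs = model.predict(x, verbose=0)  # shape (8, 1), values in [0, 1]
```

With this head you would threshold the predicted probability (e.g. at 0.5) to get a class label.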

Also, how big is your training set? If it's too small, overfitting could well be the problem.
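One way to check this without a separate test run is to hold out part of the training data during `fit`. This sketch uses random stand-in data with the question's shapes; a training accuracy that climbs while `val_accuracy` stays near 50% is the classic overfitting signature:

```python
import numpy as np
from tensorflow import keras

# Stand-in data shaped like the question's inputs (12x22 features, 0/1 labels).
x = np.random.rand(64, 12, 22).astype("float32")
y = np.random.randint(0, 2, size=(64,))

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(12, 22)),
    keras.layers.Dense(180, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# validation_split holds out the last 20% of the data; compare
# history.history["accuracy"] against history.history["val_accuracy"].
history = model.fit(x, y, epochs=2, validation_split=0.2, verbose=0)
```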

One place to start, regardless, might be to reduce the number of nodes and thus the capacity of the network itself. You could also introduce some form of regularization to prevent the network from overfitting on the training set.
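A sketch of both suggestions applied to the question's model: fewer hidden units, an L2 weight penalty, and dropout between layers (64 units, `1e-4`, and `0.5` are arbitrary starting points, not tuned values):

```python
from tensorflow import keras
from tensorflow.keras import regularizers

# Reduced-capacity, regularized version of the original model.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(12, 22)),
    keras.layers.Dense(64, activation="relu",                 # 180 -> 64 units
                       kernel_regularizer=regularizers.l2(1e-4)),  # L2 penalty
    keras.layers.Dropout(0.5),  # randomly zero half the activations in training
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Dropout is only active during training; at predict time the full network is used.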
