
Loss decreasing for both validation loss and training loss while accuracy stays the same

I'm fairly new to TensorFlow and I made a simple program to tell the difference between cats and dogs. When I ran it, my accuracy stayed around the 50% mark while the loss kept decreasing. The same thing happens with the validation metrics: the validation loss is decreasing while the validation accuracy stays at .45. This is my code:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import time
#dataset from folders
img_width = 300
img_height = 300
batch_size = 2
model = keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.2),
    layers.Input((300,300,1)),
    layers.Conv2D(64,3,padding="same", activation="relu"), # number of filters, kernel size
    layers.AveragePooling2D(),
    layers.Conv2D(16,3,padding="same", activation="relu"),
    layers.Dropout(.3),
    layers.MaxPool2D(),
    layers.Flatten(),
    layers.Dense(128),
    layers.Dense(64),
    layers.Dense(32),
    layers.Dense(2, input_dim=5,
    kernel_initializer='ones',
    kernel_regularizer=tf.keras.regularizers.L1(0.01),
    activity_regularizer=tf.keras.regularizers.L2(0.01))
])
ds_train = tf.keras.preprocessing.image_dataset_from_directory(
    r"/content/drive/MyDrive/PetPictures",
    labels="inferred",
    label_mode = "int",
    color_mode = "grayscale",
    batch_size = batch_size,
    image_size=(img_height, img_width),
    validation_split = 0.1,
    subset = "training",
    seed = 12
)
ds_validation = tf.keras.preprocessing.image_dataset_from_directory(
    r"/content/drive/MyDrive/PetPictures",
    labels='inferred',
    label_mode = "int", #catagorical, binary
    color_mode = 'grayscale',
    batch_size = batch_size,
    image_size=(img_height, img_width),
    validation_split = 0.1,
    subset = "validation",
    seed = 12
    )
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(learning_rate=.0001),
    metrics=['accuracy']
)
model.fit(ds_train,batch_size=100, epochs = 10,validation_data = ds_validation, verbose=1)

It's just acting really weird. These are the results I'm getting: Loss and accuracy

You have this code in your model:

layers.Dense(2, input_dim=5,
    kernel_initializer='ones',
    kernel_regularizer=tf.keras.regularizers.L1(0.01),
    activity_regularizer=tf.keras.regularizers.L2(0.01))

input_dim=5 doesn't belong in a Dense layer specification. You should also include a Rescaling layer to put the pixels in the range from 0 to 1, or better yet in the range from -1 to +1. Documentation for the Rescaling layer is here. I would use

layers.Rescaling(1.0/127.5, offset=-1)
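Rescaling computes output = input * scale + offset, so a scale of 1.0/127.5 with offset=-1 maps pixel values from [0, 255] onto [-1, +1].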

I think you need to put the Input layer as the first layer. I would remove the two augmentation layers for now, until you get your model working; you can add them back in later if your model is over-fitting. I would also add another Dropout layer after the Dense(32) layer. Putting it all together gives something like the sketch below.
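Here is a minimal sketch of the model with those suggestions applied. Note it also adds relu activations to the hidden Dense layers (the original ones were linear, so they would collapse into a single linear map) and drops the kernel_initializer and regularizer arguments from the last layer; the compile and fit calls can stay the same, since the final Dense(2) still outputs logits for the from_logits=True loss.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input((300, 300, 1)),             # input layer comes first
    layers.Rescaling(1.0/127.5, offset=-1),  # map [0, 255] -> [-1, +1]
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.AveragePooling2D(),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Dropout(0.3),
    layers.MaxPool2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.3),                     # extra dropout after the Dense(32) layer
    layers.Dense(2)                          # logits, matching from_logits=True in the loss
])

Once this version trains properly, the RandomFlip and RandomRotation layers can go back in right after the Rescaling layer if over-fitting shows up.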
