
Keras: loss, val_loss, and val_accuracy all stuck at 0.0000e+00

First of all, I am using 100 classes with 150 videos per class, and I divide them into 80% for the training set and 20% for the validation set.
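(For reference, a minimal sketch of such a split; AllFilePath / AllLabelList are hypothetical stand-ins for my full lists, not names from the code below:)

import numpy as np

# AllFilePath / AllLabelList: hypothetical full lists of .npy feature-file
# paths and their class labels (100 classes x 150 videos)
perm = np.random.permutation(len(AllFilePath))
cut = int(0.8 * len(AllFilePath))                        # 80% for training
TrainFilePath  = [AllFilePath[i]  for i in perm[:cut]]
TrainLabelList = [AllLabelList[i] for i in perm[:cut]]
ValiFilePath   = [AllFilePath[i]  for i in perm[cut:]]   # remaining 20% for validation
VailLabelPath  = [AllLabelList[i] for i in perm[cut:]]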

And below is my code:

def generator(filePath, labelList):
  # Shuffle (path, label) pairs together so files stay matched to labels
  tmp = [[x, y] for x, y in zip(filePath, labelList)]
  np.random.shuffle(tmp)

  Files = [n[0] for n in tmp]
  Labels = [n[1] for n in tmp]

  for File, Label in zip(Files, Labels):
    File = np.load(File)
    #x = tf.squeeze(File,1)
    #x = tf.squeeze(x,2)
    #PoolingOutput = tf.keras.layers.AveragePooling1D()(x)
    #PoolingOutput = tf.squeeze(PoolingOutput)
    #x = tf.squeeze(PoolingOutput)
    #---------------------------------------------------------
    x = tf.squeeze(File)

    # encoder (e.g. a fitted LabelBinarizer) is defined elsewhere in my notebook
    transformed_label = encoder.transform([Label])
    yield x, transformed_label[0]

train_dataset = tf.data.Dataset.from_generator(generator, args=(TrainFilePath, TrainLabelList), output_types=(tf.float64, tf.int16), output_shapes=((20, 2048), len(EncoderOnlyList)))

train_dataset = train_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#train_dataset = train_dataset.batch(16)

valid_dataset = tf.data.Dataset.from_generator(generator, args=(ValiFilePath, VailLabelPath), output_types=(tf.float64, tf.int16), output_shapes=((20, 2048), len(EncoderOnlyList)))

valid_dataset = valid_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#valid_dataset = valid_dataset.batch(16)

with tf.device(device_name):
  model = Sequential()
  model.add(keras.layers.Input(shape=(20, 2048)))
  model.add(tf.keras.layers.Masking(mask_value=0.))
  model.add(tf.keras.layers.LSTM(256))
  model.add(tf.keras.layers.Dropout(0.5))
  model.add(tf.keras.layers.Dense(128, activation='relu'))
  model.add(tf.keras.layers.Dropout(0.5))
  model.add(tf.keras.layers.Dense(100, activation='softmax'))
  model.compile(optimizer=rmsprop,  # rmsprop is defined earlier in my notebook, e.g. tf.keras.optimizers.RMSprop()
              loss='categorical_crossentropy',
              metrics=['accuracy'])
  
  model.fit(train_dataset, epochs=20, validation_data=valid_dataset)

model.save_weights('/content/drive/MyDrive/Resnet50BaseWeight_3.h5', overwrite=True)
model.save("/content/drive/MyDrive/Resnet50Base_3.h5")

And the result looks like this:

Epoch 1/20
1500/1500 [==============================] - 97s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0012 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/20
1500/1500 [==============================] - 102s 68ms/step - loss: 0.0000e+00 - accuracy: 0.0086 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 3/20
1500/1500 [==============================] - 91s 60ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 4/20
1500/1500 [==============================] - 95s 63ms/step - loss: 0.0000e+00 - accuracy: 0.0113 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 5/20
1500/1500 [==============================] - 93s 62ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 6/20
1500/1500 [==============================] - 92s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0098 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00

Even as the epochs increase, the accuracy barely improves, and most of the values come out as 0.0000e+00 like that.

I don't know what is wrong. Please help.

It is a logits/label shape mismatch: with 100 target classes, the model's output and the labels handed to the loss function must agree in shape and encoding, e.g. one-hot vectors of length 100 (or the equivalent integer class indices).

Sample: the loss function and the accuracy metric both expect consistently shaped, consistently typed inputs at every training and evaluation step. RMSprop itself works with either label encoding; you just need to keep the loss function's inputs consistent with the labels you actually feed it.
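As a minimal sketch of the pairing rule (a hypothetical batch of 8 with 100 classes, not taken from the question):

import tensorflow as tf

num_classes = 100
logits = tf.random.normal((8, num_classes))                               # raw model outputs
int_labels = tf.random.uniform((8,), maxval=num_classes, dtype=tf.int32)  # integer class indices
one_hot_labels = tf.one_hot(int_labels, num_classes)                      # one-hot vectors, shape (8, 100)

# Integer labels pair with SparseCategoricalCrossentropy
sparse_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(sparse_fn(int_labels, logits).numpy())     # non-zero, roughly log(100) ~ 4.6 for random logits

# One-hot labels pair with CategoricalCrossentropy
dense_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
print(dense_fn(one_hot_labels, logits).numpy())  # same value as above

Note that if an encoding mismatch turns the labels into all-zero vectors, categorical crossentropy (-sum(y_true * log(p))) is exactly 0, which matches the loss: 0.0000e+00 output in the question.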

import tensorflow as tf

import os
from os.path import exists

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)  # returns None; enables on-demand GPU memory allocation
print(physical_devices)
print(config)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class_100_names = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 
'bed', 'bee', 'beetle', 'bicycle', 'bottle', 
'bowl', 'boy', 'bridge', 'bus', 'butterfly', 
'camel', 'can', 'castle', 'caterpillar', 
'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 
'cockroach', 'couch', 'crab', 'crocodile', 'cup', 
'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 
'fox', 'girl', 'hamster', 'house', 'kangaroo', 
'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 
'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 
'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 
'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 
'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 
'possum', 'rabbit', 'raccoon', 'ray', 'road', 
'rocket', 'rose', 'sea', 'seal', 'shark', 
'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 
'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 
'table', 'tank', 'telephone', 'television', 'tiger', 
'tractor', 'train', 'trout', 'tulip', 'turtle', 
'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'] 

checkpoint_path = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\TF_DataSets_01.h5"
checkpoint_dir = os.path.dirname(checkpoint_path)

if not exists(checkpoint_dir) : 
    os.mkdir(checkpoint_dir)
    print("Create directory: " + checkpoint_dir)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Dataset
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar100.load_data(label_mode='fine')

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=( 32, 32, 3 )),
    # Two fixed affine rescalings of the input pixels
    tf.keras.layers.Normalization(mean=3., variance=2.),
    tf.keras.layers.Normalization(mean=4., variance=6.),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),   # -> (30, 30, 32)
    tf.keras.layers.MaxPooling2D((2, 2)),                    # -> (15, 15, 32)
    tf.keras.layers.Dense(128, activation='relu'),           # applied per position -> (15, 15, 128)
    tf.keras.layers.Reshape((128, 225)),                     # 15 * 15 * 128 == 128 * 225
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(192, activation='relu'),
    tf.keras.layers.Dense(100),                              # raw logits for the 100 classes
])

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Callback
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class custom_callback(tf.keras.callbacks.Callback):
    # Stop training once training accuracy reaches 95%
    def on_epoch_end(self, epoch, logs={}):
        if logs['accuracy'] >= 0.95:
            self.model.stop_training = True

custom_callback = custom_callback()

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.001,
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
    # decay=None,           # {'lr', 'global_clipnorm', 'clipnorm', 'decay', 'clipvalue'}
    # clipnorm=None,
    # clipvalue=None,
    # global_clipnorm=None,
    # use_ema=False,
    # ema_momentum=0.99,
    # ema_overwrite_frequency=100,
    # jit_compile=True,
    name='RMSprop',
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""                               
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,    # the final Dense(100) layer has no activation, so it outputs raw logits
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model.compile(optimizer=optimizer,
    loss=lossfn,
    metrics=['accuracy'])
    
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Load Checkpoint
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
if exists(checkpoint_path) :
    model.load_weights(checkpoint_path)
    print("model load: " + checkpoint_path)
    input("Press Any Key!")

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit( train_images, train_labels, batch_size=100, epochs=10000, callbacks=[custom_callback] )
model.save_weights(checkpoint_path)

Output: classifying 100 classes is not itself the problem; once the labels and logits are consistent, the loss is non-zero and decreases from epoch to epoch:

2022-11-26 12:02:17.507553: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8100
500/500 [==============================] - 34s 54ms/step - loss: 10.1518 - accuracy: 0.0104
Epoch 2/10000
500/500 [==============================] - 27s 53ms/step - loss: 9.5093 - accuracy: 0.0122
Epoch 3/10000
500/500 [==============================] - 26s 53ms/step - loss: 9.2861 - accuracy: 0.0127
Epoch 4/10000
462/500 [==========================>...] - ETA: 2s - loss: 9.1570 - accuracy: 0.0126

Output: the number of image categories matters less than how similar they are to one another; the CIFAR-100 training simply takes a while to converge.
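As a side note, the (test_images, test_labels) split loaded above is never used; a minimal way to check the trained model on it, reusing the model and lossfn compiled above (this line is my addition, not part of the original sample), would be:

test_loss, test_acc = model.evaluate(test_images, test_labels, batch_size=100)
print("CIFAR-100 test loss:", test_loss, " test accuracy:", test_acc)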

