

keras: val_accuracy, val_loss is loss: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 problem

First of all, I am using 100 classes with 150 videos per class, and I divide this into 80% training set and 20% validation set.

And below is my code:

def generator(filePath,labelList):
  
  tmp = [[x,y] for x, y in zip(filePath, labelList)]
  np.random.shuffle(tmp)

  Files = [n[0] for n in tmp]
  Labels = [n[1] for n in tmp]

  for File,Label in zip(Files,Labels):
    File = np.load(File)    
    #x = tf.squeeze(File,1)
    #x = tf.squeeze(x,2)
    #PoolingOutput = tf.keras.layers.AveragePooling1D()(x)
    #PoolingOutput = tf.squeeze(PoolingOutput)
    #x = tf.squeeze(PoolingOutput)
    #---------------------------------------------------------
    x = tf.squeeze(File)

    transformed_label = encoder.transform([Label])
    yield x, transformed_label[0]
     
# create the dataset at module level, outside the generator's loop
train_dataset = tf.data.Dataset.from_generator(generator, args=(TrainFilePath, TrainLabelList), output_types=(tf.float64, tf.int16), output_shapes=((20, 2048), (len(EncoderOnlyList),)))

train_dataset = train_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#train_dataset = train_dataset.batch(16)

valid_dataset = tf.data.Dataset.from_generator(generator, args=(ValiFilePath, VailLabelPath), output_types=(tf.float64, tf.int16), output_shapes=((20, 2048), (len(EncoderOnlyList),)))

valid_dataset = valid_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#valid_dataset = valid_dataset.batch(16)

with tf.device(device_name):
  model = Sequential()
  model.add(keras.layers.Input(shape=(20, 2048)))
  model.add(tf.keras.layers.Masking(mask_value=0.))
  model.add(tf.keras.layers.LSTM(256))
  model.add(tf.keras.layers.Dropout(0.5))
  model.add(tf.keras.layers.Dense(128,activation='relu'))
  model.add(tf.keras.layers.Dropout(0.5))
  model.add(tf.keras.layers.Dense(100, activation='softmax'))
  model.compile(optimizer='rmsprop',  # the string alias, or a configured tf.keras.optimizers.RMSprop instance
              loss='categorical_crossentropy',
              metrics=['accuracy'])
  
  model.fit(train_dataset, epochs=20, validation_data=valid_dataset)

model.save_weights('/content/drive/MyDrive/Resnet50BaseWeight_3.h5', overwrite=True)
model.save("/content/drive/MyDrive/Resnet50Base_3.h5")

And the result is like this:

Epoch 1/20
1500/1500 [==============================] - 97s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0012 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/20
1500/1500 [==============================] - 102s 68ms/step - loss: 0.0000e+00 - accuracy: 0.0086 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 3/20
1500/1500 [==============================] - 91s 60ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 4/20
1500/1500 [==============================] - 95s 63ms/step - loss: 0.0000e+00 - accuracy: 0.0113 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 5/20
1500/1500 [==============================] - 93s 62ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 6/20
1500/1500 [==============================] - 92s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0098 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00

Even if the epoch increases, the accuracy does not improve anymore.

And most of the results come out as 0.0000e+00 like that.

I don't know what is wrong.

Please help.

It is a logits/labels shape mismatch: with 100 target classes, the model's output and the label encoding have different shapes, and you need to form the labels into shape (100,) or an equivalent encoding so that they match.
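A quick way to see how the loss can come out as exactly 0.0000e+00 (this is an assumption about the asker's label encoder, not something the post confirms): if encoder.transform returns an all-zero vector for a label it was never fitted on, categorical cross-entropy is identically zero, because it only sums over positions where the target is nonzero:

import tensorflow as tf

# Hypothetical repro: an all-zero "one-hot" target (e.g. an unseen label)
# makes -sum(y_true * log(y_pred)) exactly 0, whatever the model predicts.
y_true = tf.zeros((1, 100))                 # all-zero target vector
y_pred = tf.fill((1, 100), 1.0 / 100.0)     # any valid probability vector
loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
print(loss.numpy())  # [0.]

If that is what happens inside the generator, accuracy hovers near chance while the loss stays pinned at zero, which matches the training log above.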

Sample: the loss function and the metrics calculation expect identical input types and shapes, compared between what the model estimates and what it is evaluated against at each step, whether the input is the same or different. RMSprop works with any of these setups, but you need to scope the estimators correctly.

import tensorflow as tf

import os
from os.path import exists

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
print(config)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class_100_names = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 
'bed', 'bee', 'beetle', 'bicycle', 'bottle', 
'bowl', 'boy', 'bridge', 'bus', 'butterfly', 
'camel', 'can', 'castle', 'caterpillar', 
'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 
'cockroach', 'couch', 'crab', 'crocodile', 'cup', 
'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 
'fox', 'girl', 'hamster', 'house', 'kangaroo', 
'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 
'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 
'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 
'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 
'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 
'possum', 'rabbit', 'raccoon', 'ray', 'road', 
'rocket', 'rose', 'sea', 'seal', 'shark', 
'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 
'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 
'table', 'tank', 'telephone', 'television', 'tiger', 
'tractor', 'train', 'trout', 'tulip', 'turtle', 
'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'] 

checkpoint_path = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\TF_DataSets_01.h5"
checkpoint_dir = os.path.dirname(checkpoint_path)

if not exists(checkpoint_dir) : 
    os.mkdir(checkpoint_dir)
    print("Create directory: " + checkpoint_dir)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Dataset
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar100.load_data(label_mode='fine')

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=( 32, 32, 3 )),
    tf.keras.layers.Normalization(mean=3., variance=2.),
    tf.keras.layers.Normalization(mean=4., variance=6.),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Reshape((128, 225)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(192, activation='relu'),
    tf.keras.layers.Dense(100),
])

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Callback
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class CustomCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # stop training once accuracy reaches 95%
        if logs['accuracy'] >= 0.95:
            self.model.stop_training = True

custom_callback = CustomCallback()

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.001,
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
    # decay=None,           # {'lr', 'global_clipnorm', 'clipnorm', 'decay', 'clipvalue'}
    # clipnorm=None,
    # clipvalue=None,
    # global_clipnorm=None,
    # use_ema=False,
    # ema_momentum=0.99,
    # ema_overwrite_frequency=100,
    # jit_compile=True,
    name='RMSprop',
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""                               
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,    # the final Dense(100) layer has no activation, so the model emits logits
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
# model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])
model.compile(optimizer=optimizer,
    loss=lossfn,
    metrics=['accuracy'])
    
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: FileWriter
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
if exists(checkpoint_path) :
    model.load_weights(checkpoint_path)
    print("model load: " + checkpoint_path)
    input("Press Any Key!")

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit( train_images, train_labels, batch_size=100, epochs=10000, callbacks=[custom_callback] )
model.save_weights(checkpoint_path)

Output: comparing 100 classes is not the problem; with the shapes matched, the model updates, and new or non-identical inputs start to select the favored classes.

2022-11-26 12:02:17.507553: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8100
500/500 [==============================] - 34s 54ms/step - loss: 10.1518 - accuracy: 0.0104
Epoch 2/10000
500/500 [==============================] - 27s 53ms/step - loss: 9.5093 - accuracy: 0.0122
Epoch 3/10000
500/500 [==============================] - 26s 53ms/step - loss: 9.2861 - accuracy: 0.0127
Epoch 4/10000
462/500 [==========================>...] - ETA: 2s - loss: 9.1570 - accuracy: 0.0126

Output: the images are categorized; the number of classes does not have much effect, but how similar they are to each other does, as you can see while waiting for the CIFAR-100 training...
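For reference, a minimal sketch (an addition, not part of the sample above) contrasting the two label/loss pairings involved here: SparseCategoricalCrossentropy takes integer class indices, while CategoricalCrossentropy takes one-hot vectors, and both must be told whether the model emits logits:

import numpy as np
import tensorflow as tf

logits = np.array([[2.0, 1.0, 0.1]], dtype=np.float32)  # raw model outputs

# Integer labels pair with SparseCategoricalCrossentropy...
sparse_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(sparse_fn(np.array([0]), logits).numpy())               # ~0.417

# ...one-hot labels pair with CategoricalCrossentropy; the values agree.
onehot_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
print(onehot_fn(np.array([[1.0, 0.0, 0.0]]), logits).numpy()) # ~0.417

Pairing them the other way round (one-hot labels with the sparse loss, or integer labels with the one-hot loss) is exactly the kind of shape mismatch described above.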

