
Tensorflow loss: NaN; accuracy: 0.1

I'm getting a NaN loss after roughly 40,000 pictures, using my own dataset. All images are similar (27x48, 1-bit). Example

There are 100,000 images for training and 40,000 for validation. I have no idea why it behaves like this.

Model creation code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(48, 27, 1)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('sigmoid'))
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

Training code:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import time

datagen = ImageDataGenerator()
dirTrain = "/content/GeneratedI/train"
train_data = datagen.flow_from_directory(dirTrain, target_size=(48, 27), batch_size=15,
                                         class_mode="categorical", color_mode="grayscale")
dirVal = "/content/GeneratedI/val"
validation_data = datagen.flow_from_directory(dirVal, target_size=(48, 27), batch_size=15,
                                              class_mode="categorical", color_mode="grayscale")
print("Training the network...")
t_start = time.time()
history = model.fit_generator(train_data,
                              steps_per_epoch=100000 // 15,
                              epochs=15,
                              validation_data=validation_data,
                              validation_steps=40000 // 15)
print(time.time() - t_start)

Output:

Found 100000 images belonging to 10 classes.
Found 40000 images belonging to 10 classes.
Training the network...
Epoch 1/9
6666/6666 [==============================] - 176s 26ms/step - loss: 0.3099 - accuracy: 0.8985 - val_loss: 0.0268 - val_accuracy: 0.9906
Epoch 2/9
6666/6666 [==============================] - 171s 26ms/step - loss: 0.0470 - accuracy: 0.9851 - val_loss: 0.0150 - val_accuracy: 0.9958
Epoch 3/9
6666/6666 [==============================] - 170s 26ms/step - loss: 0.0336 - accuracy: 0.9900 - val_loss: 0.0112 - val_accuracy: 0.9968
Epoch 4/9
6666/6666 [==============================] - 171s 26ms/step - loss: 0.0283 - accuracy: 0.9918 - val_loss: 0.0104 - val_accuracy: 0.9971
Epoch 5/9
6666/6666 [==============================] - 173s 26ms/step - loss: 0.0269 - accuracy: 0.9928 - val_loss: 0.0055 - val_accuracy: 0.9988
Epoch 6/9
6666/6666 [==============================] - 170s 25ms/step - loss: 0.0266 - accuracy: 0.9938 - val_loss: 0.0035 - val_accuracy: 0.9992
Epoch 7/9
6666/6666 [==============================] - 171s 26ms/step - loss: nan - accuracy: 0.2285 - val_loss: nan - val_accuracy: 0.1000
Epoch 8/9
6666/6666 [==============================] - 175s 26ms/step - loss: nan - accuracy: 0.1000 - val_loss: nan - val_accuracy: 0.1000
Epoch 9/9
6666/6666 [==============================] - 171s 26ms/step - loss: nan - accuracy: 0.1000 - val_loss: nan - val_accuracy: 0.1000

PS: I ran only 9 epochs to save time and just reproduce the error for showing here.

Changed the activation on the output layer to softmax and it works!
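
For reference, a minimal sketch of the change (only the final layers and compile call differ; the rest of the model stays as in the question). Softmax turns the 10 outputs into a probability distribution, which is what categorical_crossentropy expects; with independent sigmoid outputs the predicted probability for the true class can saturate to 0, which likely drives the log term in the loss to NaN.

# ... same Conv2D / MaxPooling2D / Flatten / Dropout stack as above ...
model.add(Dense(10))
model.add(Activation("softmax"))  # softmax instead of sigmoid for the 10-class output
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])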
