Accuracy killed when using ImageDataGenerator TensorFlow Keras

I have already made a post here, but the answers were not very helpful, probably because I did not phrase the question well. I now understand the problem better, but I still cannot find a solution.

I tried to build a convolutional neural network in TensorFlow Keras to classify the CIFAR-100 dataset. I managed to achieve decent results, with 60% validation accuracy, and wanted to add image augmentation.

I noticed a significant drop in accuracy, so I decided not to augment any data and instead check whether the results would stay the same if I used an ImageDataGenerator to feed data to the model, but without any augmentation.

The accuracy drop remained. I checked the way the ImageDataGenerator delivers data, but everything seems in order, and the labels for the images also appear to be correct. I also noticed that the loss was still decreasing roughly as it did when I was not using an ImageDataGenerator.
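For what it's worth, one way to sanity-check that an augmentation-free generator really passes data through untouched is a standalone test with dummy arrays (these are placeholder arrays, not the notebook's actual CIFAR-100 data):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Dummy data standing in for the real training arrays.
images = np.random.rand(10, 32, 32, 3).astype("float32")
labels = np.arange(10)

# An ImageDataGenerator with no augmentation options configured.
datagen = ImageDataGenerator()

# With shuffle=False the batch should come back in the original order.
batch_x, batch_y = next(datagen.flow(images, labels, batch_size=10, shuffle=False))

assert np.allclose(batch_x, images)     # pixel values pass through unchanged
assert np.array_equal(batch_y, labels)  # labels stay paired with their images
```

A check like this confirms the generator itself is not corrupting images or labels, which narrows the problem down to how the data is ordered or fed during training.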

# ImageDataGenerator with no augmentation options set
datagen = ImageDataGenerator()

datagen.fit(train_images)

history = model.fit(
    datagen.flow(train_images, train_labels, batch_size=128),
    shuffle=False,
    epochs=250,
    validation_data=(test_images, test_labels),
    callbacks=[callbacks.EarlyStopping(patience=10)],
)

# Without ImageDataGenerator
# history = model.fit(train_images, train_labels, batch_size=128, epochs=250,
#                     validation_data=(test_images, test_labels),
#                     callbacks=[callbacks.EarlyStopping(patience=10)])

I do not think the architecture matters, since the problem is with the ImageDataGenerator, but if any kind soul wants to check the code and the output, here is the Google Colab notebook link:

https://colab.research.google.com/drive/1Y7UZHp8cXi2dOyMeHW22SkOuWolNhRZQ?usp=sharing

I really do not know what to do anymore.

EDIT: The accuracy drops from ~0.6 to ~0.01 when using the ImageDataGenerator.

I have been asked to provide everything in the question, so here are the results:

Results when using an empty ImageDataGenerator (interrupted because of little progress):

Just to clarify: no data augmentation is being done; the ImageDataGenerator is empty.

I tried setting the shuffle parameter of datagen.flow to False, but it had little to no impact on the results.

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
169009152/169001437 [==============================] - 2s 0us/step
Epoch 1/250
  2/390 [..............................] - ETA: 11s - loss: 5.3008 - accuracy: 0.0000e+00WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0238s vs `on_train_batch_end` time: 0.0366s). Check your callbacks.
390/390 [==============================] - 29s 74ms/step - loss: 4.5465 - accuracy: 0.0041 - val_loss: 4.6752 - val_accuracy: 0.0000e+00
Epoch 2/250
390/390 [==============================] - 28s 71ms/step - loss: 4.1575 - accuracy: 0.0067 - val_loss: 4.5212 - val_accuracy: 0.0019
Epoch 3/250
390/390 [==============================] - 27s 70ms/step - loss: 3.9204 - accuracy: 0.0115 - val_loss: 4.3019 - val_accuracy: 0.0034
Epoch 4/250
390/390 [==============================] - 27s 70ms/step - loss: 3.6618 - accuracy: 0.0180 - val_loss: 3.8335 - val_accuracy: 0.0383
Epoch 5/250
390/390 [==============================] - 27s 70ms/step - loss: 3.3415 - accuracy: 0.0174 - val_loss: 3.3168 - val_accuracy: 0.0369
Epoch 6/250
390/390 [==============================] - 28s 71ms/step - loss: 3.0612 - accuracy: 0.0132 - val_loss: 3.3109 - val_accuracy: 0.0076
Epoch 7/250
390/390 [==============================] - 27s 70ms/step - loss: 2.8365 - accuracy: 0.0121 - val_loss: 3.0244 - val_accuracy: 0.0249
Epoch 8/250
390/390 [==============================] - 27s 70ms/step - loss: 2.6400 - accuracy: 0.0120 - val_loss: 2.7754 - val_accuracy: 0.0232
Epoch 9/250
390/390 [==============================] - 28s 71ms/step - loss: 2.4838 - accuracy: 0.0110 - val_loss: 2.7786 - val_accuracy: 0.0085
Epoch 10/250
390/390 [==============================] - 28s 71ms/step - loss: 2.3305 - accuracy: 0.0102 - val_loss: 2.2827 - val_accuracy: 0.0191
Epoch 11/250
390/390 [==============================] - 28s 71ms/step - loss: 2.1901 - accuracy: 0.0107 - val_loss: 2.2275 - val_accuracy: 0.0089
Epoch 12/250
390/390 [==============================] - 28s 71ms/step - loss: 2.0822 - accuracy: 0.0104 - val_loss: 2.1312 - val_accuracy: 0.0197
Epoch 13/250
390/390 [==============================] - 27s 70ms/step - loss: 1.9752 - accuracy: 0.0106 - val_loss: 2.2580 - val_accuracy: 0.0253
Epoch 14/250
390/390 [==============================] - 27s 70ms/step - loss: 1.8751 - accuracy: 0.0105 - val_loss: 1.9996 - val_accuracy: 0.0122
Epoch 15/250
390/390 [==============================] - 27s 70ms/step - loss: 1.7874 - accuracy: 0.0103 - val_loss: 2.0046 - val_accuracy: 0.0085
Epoch 16/250
390/390 [==============================] - 27s 70ms/step - loss: 1.7062 - accuracy: 0.0099 - val_loss: 1.9315 - val_accuracy: 0.0140
Epoch 17/250
390/390 [==============================] - 27s 70ms/step - loss: 1.6240 - accuracy: 0.0102 - val_loss: 1.8867 - val_accuracy: 0.0079
Epoch 18/250
390/390 [==============================] - 27s 70ms/step - loss: 1.5656 - accuracy: 0.0099 - val_loss: 1.8539 - val_accuracy: 0.0117
Epoch 19/250
390/390 [==============================] - 27s 70ms/step - loss: 1.4992 - accuracy: 0.0101 - val_loss: 1.8715 - val_accuracy: 0.0124
Epoch 20/250
390/390 [==============================] - 27s 70ms/step - loss: 1.4285 - accuracy: 0.0102 - val_loss: 1.7864 - val_accuracy: 0.0092
Epoch 21/250
390/390 [==============================] - 27s 70ms/step - loss: 1.3764 - accuracy: 0.0100 - val_loss: 1.8202 - val_accuracy: 0.0119
Epoch 22/250
159/390 [===========>..................] - ETA: 14s - loss: 1.2974 - accuracy: 0.0106
---------------------------------------------------------------------------
KeyboardInterrupt

Results without ImageDataGenerator:

Epoch 1/250
  2/391 [..............................] - ETA: 17s - loss: 5.4772 - accuracy: 0.0078WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0320s vs `on_train_batch_end` time: 0.0579s). Check your callbacks.
391/391 [==============================] - 26s 67ms/step - loss: 4.5878 - accuracy: 0.0207 - val_loss: 4.7042 - val_accuracy: 0.0134
Epoch 2/250
391/391 [==============================] - 26s 67ms/step - loss: 4.2055 - accuracy: 0.0522 - val_loss: 4.2270 - val_accuracy: 0.0538
Epoch 3/250
391/391 [==============================] - 26s 66ms/step - loss: 3.8648 - accuracy: 0.0883 - val_loss: 4.1179 - val_accuracy: 0.0814
Epoch 4/250
391/391 [==============================] - 26s 66ms/step - loss: 3.5519 - accuracy: 0.1421 - val_loss: 3.8452 - val_accuracy: 0.1325
Epoch 5/250
391/391 [==============================] - 26s 66ms/step - loss: 3.2509 - accuracy: 0.1952 - val_loss: 3.3625 - val_accuracy: 0.1882
Epoch 6/250
391/391 [==============================] - 26s 66ms/step - loss: 2.9928 - accuracy: 0.2408 - val_loss: 3.2708 - val_accuracy: 0.2161
Epoch 7/250
391/391 [==============================] - 26s 66ms/step - loss: 2.7977 - accuracy: 0.2809 - val_loss: 2.7619 - val_accuracy: 0.3035
Epoch 8/250
391/391 [==============================] - 26s 66ms/step - loss: 2.6131 - accuracy: 0.3187 - val_loss: 2.5414 - val_accuracy: 0.3501
Epoch 9/250
391/391 [==============================] - 26s 66ms/step - loss: 2.4598 - accuracy: 0.3517 - val_loss: 2.7046 - val_accuracy: 0.3255
Epoch 10/250
391/391 [==============================] - 26s 66ms/step - loss: 2.3132 - accuracy: 0.3882 - val_loss: 2.2640 - val_accuracy: 0.4070
Epoch 11/250
391/391 [==============================] - 26s 66ms/step - loss: 2.1848 - accuracy: 0.4189 - val_loss: 2.1943 - val_accuracy: 0.4327
Epoch 12/250
391/391 [==============================] - 26s 66ms/step - loss: 2.0751 - accuracy: 0.4445 - val_loss: 2.2010 - val_accuracy: 0.4361
Epoch 13/250
391/391 [==============================] - 26s 66ms/step - loss: 1.9770 - accuracy: 0.4687 - val_loss: 2.1503 - val_accuracy: 0.4551
Epoch 14/250
391/391 [==============================] - 26s 66ms/step - loss: 1.8800 - accuracy: 0.4931 - val_loss: 2.1343 - val_accuracy: 0.4603
Epoch 15/250
391/391 [==============================] - 26s 66ms/step - loss: 1.7966 - accuracy: 0.5125 - val_loss: 2.0326 - val_accuracy: 0.4885
Epoch 16/250
391/391 [==============================] - 26s 66ms/step - loss: 1.7115 - accuracy: 0.5345 - val_loss: 2.0095 - val_accuracy: 0.4921
Epoch 17/250
391/391 [==============================] - 26s 66ms/step - loss: 1.6370 - accuracy: 0.5557 - val_loss: 1.9143 - val_accuracy: 0.5168
Epoch 18/250
391/391 [==============================] - 26s 66ms/step - loss: 1.5570 - accuracy: 0.5735 - val_loss: 1.8116 - val_accuracy: 0.5317
Epoch 19/250
391/391 [==============================] - 26s 66ms/step - loss: 1.5038 - accuracy: 0.5871 - val_loss: 1.7452 - val_accuracy: 0.5520
Epoch 20/250
391/391 [==============================] - 26s 66ms/step - loss: 1.4433 - accuracy: 0.6041 - val_loss: 1.8036 - val_accuracy: 0.5433
Epoch 21/250
391/391 [==============================] - 26s 66ms/step - loss: 1.3753 - accuracy: 0.6204 - val_loss: 1.8993 - val_accuracy: 0.5321
Epoch 22/250
391/391 [==============================] - 26s 66ms/step - loss: 1.3242 - accuracy: 0.6343 - val_loss: 1.9099 - val_accuracy: 0.5382
Epoch 23/250
391/391 [==============================] - 26s 66ms/step - loss: 1.2704 - accuracy: 0.6474 - val_loss: 1.7647 - val_accuracy: 0.5667
Epoch 24/250
391/391 [==============================] - 26s 66ms/step - loss: 1.2367 - accuracy: 0.6576 - val_loss: 1.7773 - val_accuracy: 0.5657
Epoch 25/250
391/391 [==============================] - 26s 66ms/step - loss: 1.1795 - accuracy: 0.6715 - val_loss: 1.7160 - val_accuracy: 0.5766
Epoch 26/250
391/391 [==============================] - 26s 66ms/step - loss: 1.1373 - accuracy: 0.6827 - val_loss: 1.7304 - val_accuracy: 0.5774
Epoch 27/250
391/391 [==============================] - 26s 66ms/step - loss: 1.1082 - accuracy: 0.6923 - val_loss: 1.9430 - val_accuracy: 0.5465
Epoch 28/250
391/391 [==============================] - 26s 66ms/step - loss: 1.0601 - accuracy: 0.7011 - val_loss: 1.8539 - val_accuracy: 0.5669
Epoch 29/250
391/391 [==============================] - 26s 66ms/step - loss: 1.0185 - accuracy: 0.7152 - val_loss: 1.7887 - val_accuracy: 0.5778
Epoch 30/250
391/391 [==============================] - 26s 66ms/step - loss: 0.9888 - accuracy: 0.7230 - val_loss: 1.7522 - val_accuracy: 0.5884
Epoch 31/250
391/391 [==============================] - 26s 66ms/step - loss: 0.9584 - accuracy: 0.7310 - val_loss: 1.7597 - val_accuracy: 0.5903
Epoch 32/250
391/391 [==============================] - 26s 66ms/step - loss: 0.9328 - accuracy: 0.7392 - val_loss: 1.7132 - val_accuracy: 0.5991
Epoch 33/250
391/391 [==============================] - 26s 66ms/step - loss: 0.8958 - accuracy: 0.7499 - val_loss: 1.7338 - val_accuracy: 0.6036
Epoch 34/250
391/391 [==============================] - 26s 66ms/step - loss: 0.8724 - accuracy: 0.7571 - val_loss: 1.7104 - val_accuracy: 0.6079
Epoch 35/250
391/391 [==============================] - 26s 66ms/step - loss: 0.8450 - accuracy: 0.7624 - val_loss: 1.7668 - val_accuracy: 0.6038
Epoch 36/250
391/391 [==============================] - 26s 66ms/step - loss: 0.8050 - accuracy: 0.7744 - val_loss: 1.9853 - val_accuracy: 0.5697
Epoch 37/250
391/391 [==============================] - 26s 66ms/step - loss: 0.8056 - accuracy: 0.7736 - val_loss: 1.8849 - val_accuracy: 0.5859
Epoch 38/250
391/391 [==============================] - 26s 66ms/step - loss: 0.7700 - accuracy: 0.7839 - val_loss: 1.8189 - val_accuracy: 0.6049
Epoch 39/250
391/391 [==============================] - 26s 66ms/step - loss: 0.7545 - accuracy: 0.7874 - val_loss: 1.8237 - val_accuracy: 0.5989
Epoch 40/250
391/391 [==============================] - 26s 66ms/step - loss: 0.7337 - accuracy: 0.7918 - val_loss: 1.8901 - val_accuracy: 0.5918
Epoch 41/250
391/391 [==============================] - 26s 66ms/step - loss: 0.7108 - accuracy: 0.8002 - val_loss: 1.8254 - val_accuracy: 0.6090
Epoch 42/250
391/391 [==============================] - 26s 66ms/step - loss: 0.6897 - accuracy: 0.8039 - val_loss: 1.8526 - val_accuracy: 0.6094
Epoch 43/250
391/391 [==============================] - 26s 66ms/step - loss: 0.6723 - accuracy: 0.8099 - val_loss: 1.9535 - val_accuracy: 0.5924
Epoch 44/250
391/391 [==============================] - 26s 66ms/step - loss: 0.6665 - accuracy: 0.8138 - val_loss: 1.8447 - val_accuracy: 0.6037
313/313 - 3s - loss: 1.8447 - accuracy: 0.6037
0.6036999821662903

EDIT 2: I understand that my architecture is flawed, and any help is appreciated, but I would kindly ask if you could help with the ImageDataGenerator problem, for which I believe I have provided all relevant information.

The only problem I see is that you have configured shuffle to be off on the generator. Everything else is fine; there is no change in the images or labels. Note that by default, model.fit will shuffle input data, but when you use a generator you must configure the generator itself to shuffle. As a result, when you did not provide a generator the input data was being shuffled, and when you used the generator it was not. This problem seems similar to the one encountered in this question, which addressed differences between fit and the now-deprecated fit_generator method.
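In other words, the fix is to set shuffle on flow() rather than on model.fit. A minimal self-contained sketch (using a tiny placeholder model and random data, not the question's actual CNN or the CIFAR-100 arrays) of the corrected call:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder data and model, just to exercise the corrected call shape.
train_images = np.random.rand(64, 32, 32, 3).astype("float32")
train_labels = np.random.randint(0, 100, size=64)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(32, 32, 3)),
    keras.layers.Dense(100, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

datagen = ImageDataGenerator()

# shuffle is configured on the generator; the `shuffle` argument of
# model.fit is ignored when the input is a generator.
history = model.fit(
    datagen.flow(train_images, train_labels, batch_size=16, shuffle=True),
    epochs=1,
    verbose=0,
)
```

Note that flow() actually defaults to shuffle=True, so simply not passing shuffle=False anywhere would also give shuffled batches.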

Here is a copy of the notebook that I used to tinker around with.

When the generator is configured to shuffle, I get reasonable convergence that is not dissimilar from providing the images directly.

With Generator

Epoch 1/10
391/391 [==============================] - 27s 69ms/step - loss: 4.6436 - accuracy: 0.0140 - val_loss: 4.6110 - val_accuracy: 0.0084
Epoch 2/10
391/391 [==============================] - 26s 67ms/step - loss: 4.4306 - accuracy: 0.0373 - val_loss: 4.5542 - val_accuracy: 0.0237
Epoch 3/10
391/391 [==============================] - 26s 67ms/step - loss: 4.2667 - accuracy: 0.0587 - val_loss: 4.2758 - val_accuracy: 0.0590
Epoch 4/10
391/391 [==============================] - 26s 67ms/step - loss: 4.1242 - accuracy: 0.0798 - val_loss: 4.1708 - val_accuracy: 0.0725
Epoch 5/10
391/391 [==============================] - 26s 68ms/step - loss: 4.0096 - accuracy: 0.0980 - val_loss: 3.9277 - val_accuracy: 0.1188
Epoch 6/10
391/391 [==============================] - 26s 68ms/step - loss: 3.8621 - accuracy: 0.1288 - val_loss: 4.5334 - val_accuracy: 0.0659
Epoch 7/10
391/391 [==============================] - 26s 68ms/step - loss: 3.7434 - accuracy: 0.1460 - val_loss: 4.5092 - val_accuracy: 0.0835
Epoch 8/10
391/391 [==============================] - 27s 68ms/step - loss: 3.5886 - accuracy: 0.1769 - val_loss: 3.8606 - val_accuracy: 0.1486
Epoch 9/10
391/391 [==============================] - 27s 69ms/step - loss: 3.4937 - accuracy: 0.1975 - val_loss: 3.9907 - val_accuracy: 0.1236
Epoch 10/10
391/391 [==============================] - 27s 70ms/step - loss: 3.3655 - accuracy: 0.2190 - val_loss: 3.4287 - val_accuracy: 0.2202

Without Generator

Epoch 1/10
391/391 [==============================] - 27s 69ms/step - loss: 4.6348 - accuracy: 0.0154 - val_loss: 4.5947 - val_accuracy: 0.0152
Epoch 2/10
391/391 [==============================] - 27s 68ms/step - loss: 4.4097 - accuracy: 0.0398 - val_loss: 4.4070 - val_accuracy: 0.0419
Epoch 3/10
391/391 [==============================] - 26s 68ms/step - loss: 4.2278 - accuracy: 0.0643 - val_loss: 4.4637 - val_accuracy: 0.0481
Epoch 4/10
391/391 [==============================] - 26s 68ms/step - loss: 4.0906 - accuracy: 0.0842 - val_loss: 4.3163 - val_accuracy: 0.0625
Epoch 5/10
391/391 [==============================] - 27s 68ms/step - loss: 3.9589 - accuracy: 0.1033 - val_loss: 4.3802 - val_accuracy: 0.0693
Epoch 6/10
391/391 [==============================] - 27s 68ms/step - loss: 3.8119 - accuracy: 0.1318 - val_loss: 3.8241 - val_accuracy: 0.1345
Epoch 7/10
391/391 [==============================] - 26s 68ms/step - loss: 3.7324 - accuracy: 0.1447 - val_loss: 3.6602 - val_accuracy: 0.1598
Epoch 8/10
391/391 [==============================] - 27s 68ms/step - loss: 3.6160 - accuracy: 0.1669 - val_loss: 3.6975 - val_accuracy: 0.1573
Epoch 9/10
391/391 [==============================] - 26s 68ms/step - loss: 3.4929 - accuracy: 0.1893 - val_loss: 3.5784 - val_accuracy: 0.1956
Epoch 10/10
391/391 [==============================] - 26s 68ms/step - loss: 3.4052 - accuracy: 0.2061 - val_loss: 3.3669 - val_accuracy: 0.2298

Comparisons

[Accuracy and loss comparison plots]
