Training Accuracy not increasing - CNN with TensorFlow
I recently started doing machine learning with TensorFlow in a Google Colab notebook, classifying images of food from the web.
My dataset contains exactly 101,000 images in 101 classes (1,000 images per class). I built the network following this TensorFlow tutorial.
The code I wrote is as follows:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing import image_dataset_from_directory

# image dimensions
batch_size = 32
img_height = 50
img_width = 50

# 80% for training, 20% for validation
train_ds = image_dataset_from_directory(data_dir,
                                        shuffle=True,
                                        validation_split=0.2,
                                        subset="training",
                                        seed=123,
                                        batch_size=batch_size,
                                        image_size=(img_height, img_width))
val_ds = image_dataset_from_directory(data_dir,
                                      shuffle=True,
                                      validation_split=0.2,
                                      subset="validation",
                                      seed=123,
                                      batch_size=batch_size,
                                      image_size=(img_height, img_width))
# autotuning, configuring for performance
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

# data augmentation layer
data_augmentation = keras.Sequential([
    layers.experimental.preprocessing.RandomFlip(
        "horizontal", input_shape=(img_height, img_width, 3)),
    layers.experimental.preprocessing.RandomRotation(0.1),
    layers.experimental.preprocessing.RandomZoom(0.1),
])
# network definition
num_classes = 101
model = Sequential([
    data_augmentation,
    layers.experimental.preprocessing.Rescaling(1./255),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
After training for 500 epochs, the accuracy improves abnormally slowly:
epoch 100: 2525/2525 - 19s 8ms/step - loss: 2.8151 - accuracy: 0.3144 - val_loss: 3.1659 - val_accuracy: 0.2549
epoch 500: 2525/2525 - 21s 8ms/step - loss: 2.7349 - accuracy: 0.0333 - val_loss: 3.1260 - val_accuracy: 0.2712
What I have tried:
So far the code above has given the best results, but I am still wondering:
Is this behavior expected? Is it because the dataset is so large? Or is there some flaw in my code that is hindering the learning process?
Remove from_logits=True from your loss function. Your final Dense layer uses activation='softmax', so the model already outputs probabilities rather than raw logits; with from_logits=True, SparseCategoricalCrossentropy applies a softmax a second time to those probabilities. Either drop the argument (from_logits defaults to False), or remove the softmax activation from the last Dense layer and keep from_logits=True.
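The effect of this mismatch can be seen numerically without TensorFlow: applying softmax twice compresses even a very confident prediction toward uniform, so the loss gets stuck near a floor no matter how good the network gets. A minimal plain-Python sketch (the 3-class logits here are made up for illustration):

```python
import math

def softmax(xs):
    # numerically stable softmax
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, true_idx):
    return -math.log(probs[true_idx])

# Hypothetical raw network outputs (logits) for a 3-class problem,
# strongly favouring the true class (index 0).
logits = [4.0, 0.0, 0.0]

# Correct pairing: softmax activation in the model, loss with from_logits=False
probs = softmax(logits)
loss_correct = cross_entropy(probs, 0)

# The bug: the model already applied softmax, but from_logits=True makes
# the loss apply softmax AGAIN, this time to the probabilities.
loss_double = cross_entropy(softmax(probs), 0)

print(f"correct loss:        {loss_correct:.4f}")  # small, near 0
print(f"double-softmax loss: {loss_double:.4f}")   # stuck near a floor
```

Because probabilities all lie in [0, 1], the second softmax can never spread them far apart, so the double-softmax loss barely responds to improvements in the model; with 101 classes the effect is even more severe.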