Error: Train inception v3 in tensorflow 2
I am new to tensorflow (version 2.3.0). I want to build an image classifier based on the 'oxford_flowers102' dataset using Inception v3. I have prepared the dataset and now want to train the Inception v3 network, but I get an error I do not understand:

ValueError: Input 0 of layer conv2d is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: [500, 667, 3]

The preprocessing of the dataset seems fine; the error occurs when I feed the data into the Inception v3 network with

predictions = model(images, training=True)

Here is my full code:
import warnings
import logging
import os

import tensorflow as tf
import tensorflow_datasets as tfds

tfds.disable_progress_bar()
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
warnings.filterwarnings('ignore')

EPOCHS = 50
BATCH_SIZE = 8
NUM_CLASSES = 102
image_height = 299
image_width = 299
channels = 3
save_model_dir = os.getcwd()

def get_model():
    model = tf.keras.applications.InceptionV3(include_top=True, weights=None, classes=NUM_CLASSES)
    model.build(input_shape=(None, image_height, image_width, channels))
    return model

def get_loss_object():
    return tf.keras.losses.SparseCategoricalCrossentropy()

def get_optimizer():
    return tf.keras.optimizers.Adadelta()

def get_train_loss():
    return tf.keras.metrics.Mean(name='train_loss')

def get_train_accuracy():
    return tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

def get_valid_loss():
    return tf.keras.metrics.Mean(name='valid_loss')

def get_valid_accuracy():
    return tf.keras.metrics.SparseCategoricalAccuracy(name='valid_accuracy')

if __name__ == '__main__':
    dataset, dataset_info = tfds.load('oxford_flowers102', with_info=True, as_supervised=True)
    test_set, training_set, validation_set = dataset['test'], dataset['train'], dataset['validation']

    num_training_examples = 0
    num_validation_examples = 0
    num_test_examples = 0
    for example in training_set:
        num_training_examples += 1
    for example in validation_set:
        num_validation_examples += 1
    for example in test_set:
        num_test_examples += 1

    model = get_model()
    loss_object = get_loss_object()
    optimizer = get_optimizer()
    train_loss = get_train_loss()
    train_accuracy = get_train_accuracy()
    valid_loss = get_valid_loss()
    valid_accuracy = get_valid_accuracy()

    @tf.function
    def train_step(images, labels):
        with tf.GradientTape() as tape:
            predictions = model(images, training=True)  # here I get the error

    # start training
    for epoch in range(EPOCHS):
        train_loss.reset_states()
        train_accuracy.reset_states()
        valid_loss.reset_states()
        valid_accuracy.reset_states()
        step = 0
        for images, labels in training_set:
            step += 1
            train_step(images, labels)
The expected input shape is [None, height, width, channels], which is obviously incompatible with your [500, 667, 3].

In the "start training" block you call train_step on each image and label. Note that

for images, labels in training_set:
    pass

does not actually yield batches; the dataset yields one image at a time, so this is really

for image, label in training_set:
    pass

You are better off training on batches of images and labels. If you really want to use a single image per step, just reshape the image, for example from [500, 667, 3] to [1, 500, 667, 3].
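The reshape described above can be done with tf.expand_dims, which adds the leading batch axis without copying data (a minimal sketch; the shape [500, 667, 3] is the one from the error message):

```python
import tensorflow as tf

# A single HWC image, as yielded by an as_supervised=True dataset.
image = tf.zeros([500, 667, 3])

# Add a leading batch axis: [500, 667, 3] -> [1, 500, 667, 3].
batched = tf.expand_dims(image, axis=0)
print(batched.shape)  # (1, 500, 667, 3)
```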
Keras models expect batches of instances as input, but you are feeding images one at a time. If you want to train the model, I suggest batching the dataset. You can do it like this:
dataset, dataset_info = tfds.load('oxford_flowers102', with_info=True, as_supervised=True)

def resize_im(im, label):
    # InceptionV3 with include_top=True expects 299x299 inputs.
    return tf.image.resize(im, [299, 299]), label

test_set, training_set, validation_set = dataset['test'], dataset['train'], dataset['validation']
training_set = training_set.map(resize_im)
test_set = test_set.map(resize_im)
validation_set = validation_set.map(resize_im)

training_set = training_set.batch(32)
test_set = test_set.batch(32)
validation_set = validation_set.batch(32)

# Then feed the batches to the model
for images, labels in training_set:
    train_step(images, labels)
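Note also that the train_step in the question stops at the forward pass. A complete training step typically computes the loss inside the tape, applies the gradients, and updates the metrics. A minimal sketch of that pattern, mirroring the loss_object, optimizer, train_loss and train_accuracy objects defined in the question (the tiny stand-in model here is only for illustration, not the InceptionV3 from the question):

```python
import tensorflow as tf

# Stand-in objects mirroring those defined in the question.
model = tf.keras.Sequential([tf.keras.layers.Dense(4)])
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_object(labels, predictions)
    # Backpropagate and update the weights.
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    # Track the running loss and accuracy.
    train_loss(loss)
    train_accuracy(labels, predictions)

# One batch of 8 examples with 3 features each.
train_step(tf.random.normal([8, 3]), tf.constant([0, 1, 2, 3, 0, 1, 2, 3]))
print(float(train_loss.result()))
```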
Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you need to repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.