
How to use Tensorflow dataset.cache() properly

My tensorflow version is 2.6.0, and I tried to cache my dataset on disk with dataset.cache(dir_1). But when I train my model on the cached dataset, the training-set accuracy reported during model.fit() turns out to differ from what model.evaluate() reports on the same data.

all_data_dir = 'D:\\jupyterWorkingSpace\\image_data\\feiyan X'
all_data_dir = pathlib.Path(all_data_dir)
batch_size = 8
img_height = 512
img_width = 512
train_ds = tf.keras.utils.image_dataset_from_directory('D:\\jupyterWorkingSpace\\image_data\\feiyan X\\train', 
                                                       seed=123,
                                                       label_mode = 'binary',
                                                       shuffle=True,
                                                       image_size=(img_height, img_width),
                                                       batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory('D:\\jupyterWorkingSpace\\image_data\\feiyan X\\test', 
                                                       seed=123,
                                                       label_mode = 'binary',
                                                       shuffle=True,
                                                       image_size=(img_height, img_width),
                                                       batch_size=batch_size)
normalization_layer = keras.layers.Rescaling(1./255)
AUTOTUNE = tf.data.AUTOTUNE
dir_1 = './train_cache/a'
dir_2 = './val_cache/a'

normalized_train_ds_ = train_ds.cache(dir_1).shuffle(buffer_size=20).map(lambda x, y: (normalization_layer(x), y)).prefetch(AUTOTUNE)
normalized_val_ds_ = val_ds.cache(dir_2).shuffle(buffer_size=20).map(lambda x, y: (normalization_layer(x), y)).prefetch(AUTOTUNE)
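As an aside on ordering: the conventional tf.data layout puts deterministic, expensive work before cache(), shuffle() after it so each epoch gets a fresh order, and prefetch() last. The pipeline above caches the raw images and rescales after the cache, which also works but re-runs the map every epoch. A minimal sketch with synthetic data standing in for the image dataset (illustrative only, not the question's actual pipeline):

```python
import tensorflow as tf

# Synthetic stand-in for the image dataset: 10 samples, binary labels.
ds = tf.data.Dataset.range(10).map(lambda i: (tf.cast(i, tf.float32), i % 2))

# Conventional order: deterministic preprocessing -> cache() -> shuffle()
# -> batch() -> prefetch(). Pass a file path to cache() to spill to disk;
# with no argument the cache lives in memory.
pipeline = (ds
            .map(lambda x, y: (x / 255.0, y))  # stands in for decoding/rescaling
            .cache()
            .shuffle(buffer_size=10)
            .batch(4)
            .prefetch(tf.data.AUTOTUNE))

# Cached elements are identical across epochs; only the order changes.
first_epoch = sorted(float(x) for xb, _ in pipeline for x in xb)
second_epoch = sorted(float(x) for xb, _ in pipeline for x in xb)
assert first_epoch == second_epoch
```

Note that a file-based cache is written once and then reused: if the preprocessing before cache() changes, the old cache files must be deleted or TensorFlow will keep serving the stale data.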

...

loss = weightedBCE(normal_train_size,feiyan_train_size)


exponential_decay_fn = exponential_decay(lr0=0.0005, s=10)
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
earlystop = keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0.001 ,patience=20,mode='max')
optimizer = keras.optimizers.SGD(learning_rate=0.00005,momentum=0.9,nesterov=True)


model.compile(loss=loss,optimizer=optimizer,metrics=['accuracy'])
savebestmodel = keras.callbacks.ModelCheckpoint('1.h5', 
                                                monitor = 'val_accuracy', 
                                                verbose = 1, 
                                                save_best_only = True, 
                                                mode = 'auto')

history = model.fit(normalized_train_ds_,
                    epochs=1,
                    validation_data=normalized_val_ds_,
                    callbacks=[savebestmodel, earlystop, lr_scheduler])

[Screenshot: training accuracy vs. evaluation accuracy]

The problem has been solved. It was caused by freezing ResNet50 incorrectly.
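A common pair of mistakes with a frozen backbone produces exactly this symptom: toggling trainable after compile() (which only takes effect on the next compile()), and not passing training=False when calling the frozen model, so its BatchNormalization layers use per-batch statistics during fit() but moving statistics during evaluate(). A hedged sketch of the usual fix (weights=None and a small 64x64 input are used here only to keep the example self-contained; the original presumably uses pretrained weights and 512x512 inputs):

```python
import tensorflow as tf
from tensorflow import keras

# Frozen ResNet50 backbone (no classification head, global average pooling).
base = keras.applications.ResNet50(include_top=False, weights=None,
                                   input_shape=(64, 64, 3), pooling='avg')

# Freeze BEFORE compile(); changing trainable afterwards has no effect
# until the model is compiled again.
base.trainable = False

inputs = keras.Input(shape=(64, 64, 3))
# training=False keeps the frozen BatchNorm layers in inference mode even
# during fit(), so training-time accuracy matches model.evaluate().
x = base(inputs, training=False)
outputs = keras.layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
model.compile(loss='binary_crossentropy', optimizer='sgd',
              metrics=['accuracy'])

assert not base.trainable
```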


Disclaimer: The technical posts on this site are licensed under CC BY-SA 4.0. If you repost, please cite this site's URL or the original source. For any questions, contact: yoyou2525@163.com.

 
粤ICP备18138465号  © 2020-2024 STACKOOM.COM