I followed this tutorial to generate data on the fly with the Keras fit_generator() method, in order to train my neural network model.
I created a generator by subclassing keras.utils.Sequence. The call to fit_generator() is:
history = model.fit_generator(
    generator=EVDSSequence(images_train, TRAIN_BATCH_SIZE, INPUT_IMG_DIR,
                           INPUT_JSON_DIR, SPLIT_CHAR, sizeArray, NCHW, shuffle=True),
    steps_per_epoch=None,
    epochs=EPOCHS,
    validation_data=EVDSSequence(images_valid, VALID_BATCH_SIZE, INPUT_IMG_DIR,
                                 INPUT_JSON_DIR, SPLIT_CHAR, sizeArray, NCHW, shuffle=True),
    validation_steps=None,
    callbacks=callbacksList,
    verbose=1,
    workers=0,
    max_queue_size=1,
    use_multiprocessing=False)
steps_per_epoch is None, so the number of steps per epoch is computed by the Sequence's __len__() method.
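To make the setup concrete, here is a minimal sketch of what such a generator looks like. This is an assumption-laden illustration, not the asker's actual EVDSSequence: the class and attribute names are hypothetical, it works on plain lists instead of loading images, and it omits the keras.utils.Sequence base class so it runs standalone (in real code you would subclass keras.utils.Sequence).

```python
import math
import random

class ShuffledSequence:
    """Hypothetical minimal Sequence-style generator (names are assumptions)."""

    def __init__(self, images, batch_size, shuffle=True):
        self.images = list(images)
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.indices = list(range(len(self.images)))
        self.on_epoch_end()  # Keras also calls this once before the first epoch

    def __len__(self):
        # Steps per epoch when steps_per_epoch=None;
        # math.ceil keeps the final partial batch instead of dropping it.
        return math.ceil(len(self.images) / self.batch_size)

    def __getitem__(self, idx):
        # Return the idx-th batch according to the current index order.
        batch_idx = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        return [self.images[i] for i in batch_idx]

    def on_epoch_end(self):
        # Reshuffle the exploration order between epochs.
        if self.shuffle:
            random.shuffle(self.indices)
```

With shuffle=True, each call to on_epoch_end() should yield a new batch order; the problem described below is that Keras never makes that call after an epoch finishes.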
As stated in the tutorial linked above:

Here, the method on_epoch_end is triggered once at the very beginning as well as at the end of each epoch. If the shuffle parameter is set to True, we will get a new order of exploration at each pass (or just keep a linear exploration scheme otherwise).
My problem is that the on_epoch_end() method is called only once, at the very beginning, and never at the end of each epoch. As a result, the batch order is the same in every epoch.
I tried using np.ceil instead of np.floor in the __len__() method, but without success.
Do you know why on_epoch_end is not called at the end of each epoch? Can you suggest a workaround to shuffle the order of my batches at the end (or at the beginning) of each epoch?
Many thanks!
I encountered the same problem. I have no idea why it happens, but there is a workaround: call on_epoch_end() inside __len__(), since __len__() is called at every epoch.
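A sketch of that workaround, under the same assumptions as above (hypothetical names, plain lists instead of image data, no keras.utils.Sequence base class so it runs standalone): since Keras queries __len__ for every epoch, reshuffling there gives a fresh batch order even when on_epoch_end() is never triggered by the framework.

```python
import math
import random

class WorkaroundSequence:
    """Hypothetical Sequence that reshuffles from __len__ (workaround sketch)."""

    def __init__(self, images, batch_size, shuffle=True):
        self.images = list(images)
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.indices = list(range(len(self.images)))

    def __len__(self):
        # Workaround: reshuffle every time Keras asks for the epoch length.
        self.on_epoch_end()
        return math.ceil(len(self.images) / self.batch_size)

    def __getitem__(self, idx):
        batch_idx = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        return [self.images[i] for i in batch_idx]

    def on_epoch_end(self):
        if self.shuffle:
            random.shuffle(self.indices)
```

Note that __len__ may be queried more than once per epoch, so the order can be shuffled slightly more often than strictly once per epoch; for the purpose of randomizing batch order between epochs, that is harmless.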
Might be related to the issue: Keras model.fit not calling Sequence.on_epoch_end() #35911
A quick fix is to use a LambdaCallback (note that I use fit, which should be sufficient, as fit_generator is deprecated):
from tensorflow.keras.callbacks import LambdaCallback

model.fit(generator, callbacks=[LambdaCallback(on_epoch_end=generator.on_epoch_end)])
Hope it helps!
And I'm finding that when you create a callback_lambda with an on_predict_end() function, it is not called when prediction ends. By the way, predict() does take a callbacks = list(...) argument.
Also, it appears you can test a callback like this:
(create your 'model' object)
callback_batch_end <- callback_lambda(
  on_batch_end = function(batch, logs) {
    cat("Hello world\n")
  }
)
callback_batch_end$on_batch_end(1, "x")
(prints 'Hello world')
callback_predict_end <- callback_lambda(
  on_predict_end = function(logs) {
    cat("Hello world\n")
  }
)
callback_predict_end$on_predict_end("x")
(prints nothing)