Keras callbacks in custom epoch loop
I use Keras to train an LSTM. The input sequences are of different lengths. Let's say the sequences have lengths between 1 and num_seq. Therefore, in each epoch I group the sequences by length in order to use a batch size > 1:
for epoch in xrange(nb_epochs):
    for i in range(1, num_seq):
        X, y = get_sequences(length=i)
        model.fit(X, y, batch_size=100, epochs=1, validation_split=0.1, callbacks=None)
Because I use a custom loop over the epochs, callbacks that rely on the epoch information do not work properly (e.g. TensorBoard, History, etc.). What would be a way around this problem? Is there a way to tell the fit function which epoch it is currently on?
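One partial workaround worth noting (assuming a Keras version whose fit supports it) is the initial_epoch argument, which tells fit what the real epoch number is so epoch-aware callbacks log it correctly. A minimal sketch, wrapped in a hypothetical helper fit_grouped so the question's loop can be reused:

```python
def fit_grouped(model, nb_epochs, num_seq, get_sequences):
    """Run the question's grouped-by-length loop, passing the true epoch to fit."""
    for epoch in range(nb_epochs):
        for i in range(1, num_seq):
            X, y = get_sequences(length=i)
            # initial_epoch tells callbacks the real epoch number;
            # epochs must be initial_epoch + 1 so exactly one epoch runs
            model.fit(X, y, batch_size=100,
                      initial_epoch=epoch, epochs=epoch + 1,
                      validation_split=0.1)
```

This keeps the custom loop but does not fix the per-batch statistics that callbacks accumulate across the inner loop, so the generator approach below is usually cleaner.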
When manipulating your training data during training, you should call model.train_on_batch incrementally or, better yet, use fit_generator, which lets you define a Python generator that produces (x, y) tuples for each batch. This then takes care of the proper invocation of callbacks as well.
For example:
def train_gen():
    while True:  # fit_generator expects a generator that loops forever
        for i in range(1, num_seq):
            X, y = get_sequences(length=i)
            yield X, y  # one batch per sequence length

# note: call train_gen() to obtain the generator object;
# range(1, num_seq) yields num_seq - 1 batches per epoch
model.fit_generator(train_gen(), steps_per_epoch=num_seq - 1)
The downside of this is that you have to do the batching yourself and also have to supply the validation split yourself, which you can do with a generator as well (so you can reuse most of the code).
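A sketch of handling both batching and the validation split inside one generator. get_sequences and num_seq are the ones from the question and are passed in here; the per-group 10% hold-out and batch size of 100 mirror the question's fit call but are otherwise assumptions:

```python
def make_gen(get_sequences, num_seq, split="train", val_fraction=0.1, batch_size=100):
    """Infinite generator over all sequence lengths, honoring a train/val split."""
    while True:  # fit_generator needs an endless generator
        for i in range(1, num_seq):
            X, y = get_sequences(length=i)
            n_val = max(1, int(len(X) * val_fraction))  # hold out ~10% of each length group
            if split == "train":
                X, y = X[:-n_val], y[:-n_val]
            else:
                X, y = X[-n_val:], y[-n_val:]
            # split the length group into batches of at most batch_size
            for s in range(0, len(X), batch_size):
                yield X[s:s + batch_size], y[s:s + batch_size]

# usage sketch (train_steps / val_steps are the total batch counts per epoch):
# model.fit_generator(make_gen(get_sequences, num_seq), steps_per_epoch=train_steps,
#                     validation_data=make_gen(get_sequences, num_seq, split="val"),
#                     validation_steps=val_steps)
```

Because both splits come from the same function, most of the code is shared, as the answer suggests.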