

Trained keras model much slower making its predictions than in training

I trained a keras model overnight, and got 75% accuracy which I am happy with right now. It has 60,000 samples, each with a sequence length of 700, and a vocabulary of 30. Each epoch takes about 10 minutes on my gpu. So that's 60,000 samples / 600 seconds, which is roughly 100 samples per second, and that has to include back propagation. So I saved my hdf5 file and loaded it again.

<code># Model:
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(128, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.25))
model.add(LSTM(64))
model.add(Dropout(0.25))
model.add(Dense(y.shape[1], activation='softmax'))
</code>
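
For context, a minimal sketch of the save/load round trip described above, using Keras's standard model.save and load_model calls; the filename here is hypothetical:

<code># Sketch of saving the trained model to HDF5 and reloading it for inference.
# 'model.hdf5' is a hypothetical filename.
from keras.models import load_model

model.save('model.hdf5')           # writes architecture + weights + optimizer state
model = load_model('model.hdf5')   # restores the compiled model
</code>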

When I then make my predictions it is taking more like 1 second per prediction, which is 100 times slower than training. The predictions are good; I've looked at small batches and I can use them. The problem is that I need many 100,000s of them. 10 ms per prediction would work, 1 second won't.

Can anyone suggest ways of speeding up Keras predictions?

I think it's because Keras's default predict behavior uses a batch size of 32. As a result, especially if you're using a GPU, the small batch size destroys performance. If you just change the batch size, e.g. predict(X_test, batch_size=128), you'll get significantly faster performance.
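
A minimal sketch of that suggestion, assuming X_test is the full array of inputs; the batch size of 128 comes from the answer, and larger values may help further if they fit in GPU memory:

<code># Predict in larger batches instead of Keras's default batch_size=32.
predictions = model.predict(X_test, batch_size=128, verbose=1)

# If memory allows, even larger batches (e.g. 512 or 1024) can reduce
# per-sample overhead further on a GPU.
</code>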
