Keras with TensorFlow backend --- MemoryError in model.fit() with checkpoint callbacks
I am trying to follow this tutorial to learn some deep learning with Keras, but I keep getting a MemoryError. Can you point out what is causing it and how to deal with it?
Here is the code:
import numpy as np
from keras import models, regularizers, layers
from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)
Here is the traceback (the line numbers do not match the code above):
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/uttam/pycharm-2018.2.4/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/home/uttam/pycharm-2018.2.4/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/uttam/PycharmProjects/IMDB/imdb.py", line 33, in <module>
x_train = vectorize_sequences(train_data)
File "/home/uttam/PycharmProjects/IMDB/imdb.py", line 27, in vectorize_sequences
results = np.zeros((len(sequences), dimension))
MemoryError
Yes, you are right. The problem is indeed caused by vectorize_sequences.

You should either run that logic in batches (on sliced data, such as partial_x_train) or use a generator (here is a good explanation with examples).

I hope this helps :)
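To illustrate the generator approach: allocating the full one-hot matrix at once needs about 25000 × 10000 × 8 bytes ≈ 2 GB as float64, which is what triggers the MemoryError. A sketch of a batch generator (the function name, batch size, and the use of float32 are my own choices, not from the original answer) that only ever holds one small batch in memory could look like this:

```python
import numpy as np

def vectorized_batches(sequences, labels, batch_size=512, dimension=10000):
    """Yield (x, y) batches, multi-hot encoding only `batch_size` reviews
    at a time instead of the whole dataset."""
    while True:  # Keras expects generators to loop indefinitely
        for start in range(0, len(sequences), batch_size):
            batch = sequences[start:start + batch_size]
            # float32 halves memory use compared to the default float64
            results = np.zeros((len(batch), dimension), dtype=np.float32)
            for i, sequence in enumerate(batch):
                results[i, sequence] = 1.
            yield results, np.asarray(labels[start:start + batch_size])
```

With the old Keras API you would then pass this to something like `model.fit_generator(vectorized_batches(train_data, train_labels), steps_per_epoch=len(train_data) // 512, ...)` instead of calling `model.fit` on a fully materialized `x_train`.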