
Keras with TensorFlow backend --- MemoryError

I am trying to follow this tutorial to learn a bit about deep learning with Keras, but I keep getting a MemoryError. Can you please point out what is causing it and how to fix it?

Here is the code:

import numpy as np
from keras import models, regularizers, layers
from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results


x_train = vectorize_sequences(train_data)

Here is the traceback (the line numbers don't match the code shown above):

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/home/uttam/pycharm-2018.2.4/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/home/uttam/pycharm-2018.2.4/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/uttam/PycharmProjects/IMDB/imdb.py", line 33, in <module>
    x_train = vectorize_sequences(train_data)
  File "/home/uttam/PycharmProjects/IMDB/imdb.py", line 27, in vectorize_sequences
    results = np.zeros((len(sequences), dimension))
MemoryError
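For context, the size of the failing allocation can be computed directly. The IMDB training split contains 25,000 reviews, and np.zeros defaults to float64 (8 bytes per element), so the array in vectorize_sequences needs roughly 2 GB on its own, before the test set or any model buffers are counted:

```python
import numpy as np

# 25,000 IMDB training reviews, each vectorized into a 10,000-dim row.
# np.zeros defaults to float64, i.e. 8 bytes per element; passing
# dtype='float32' (or even 'uint8' for a 0/1 matrix) would shrink this.
n_reviews = 25_000
dimension = 10_000
bytes_needed = n_reviews * dimension * np.dtype(np.float64).itemsize
print(bytes_needed / 1024**3)  # roughly 1.86 GiB for x_train alone
```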

Yes, you are correct: the problem arises in vectorize_sequences.

You should do that logic in batches (slicing the data, like you do for partial_x_train) or use generators (here is a good explanation and example).
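A minimal sketch of the generator approach, reusing the vectorize_sequences helper from the question (the batch_size of 512 and the batch_generator name are my own choices, not from the tutorial):

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10_000):
    # Same helper as in the question, but applied to one small batch at
    # a time, and using float32 to halve the per-batch footprint.
    results = np.zeros((len(sequences), dimension), dtype=np.float32)
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

def batch_generator(data, labels, batch_size=512, dimension=10_000):
    # Yields (x, y) pairs batch by batch, so only batch_size rows are
    # ever materialized at once instead of the full 25,000 x 10,000 matrix.
    while True:  # Keras expects training generators to loop indefinitely
        for start in range(0, len(data), batch_size):
            x = vectorize_sequences(data[start:start + batch_size], dimension)
            y = np.asarray(labels[start:start + batch_size], dtype=np.float32)
            yield x, y

# Usage with the Keras API of that era (model built as in the tutorial):
# model.fit_generator(batch_generator(train_data, train_labels),
#                     steps_per_epoch=len(train_data) // 512, epochs=4)
```

The same generator works for validation data by passing it as validation_data with a matching validation_steps count.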

I hope this helps :)
