
How to use numpy memmap inside a keras generator without exceeding RAM?

I'm trying to use the numpy.memmap method inside a generator for training a neural network with keras, so that training does not exceed the available RAM. I used this post as a reference, but without success. Here is my attempt:

import numpy as np

def My_Generator(path, batch_size, tempo, janela):
    samples_per_epoch = sum(1 for line in np.load(path))
    number_of_batches = samples_per_epoch // batch_size
    #data = np.memmap(path, dtype='float64', mode='r+', shape=(samples_per_epoch, 18), order='F')
    data = np.load(path)  # note: this loads the whole array into RAM
    # create memmap arrays (one file each) to store the output;
    # 'w+' creates the backing files if they do not exist yet
    X_output = np.memmap('X_output', dtype='float64', shape=(samples_per_epoch, 96, 100, 17), mode='w+', order='F')
    y_output = np.memmap('y_output', dtype='float64', shape=(samples_per_epoch, 1), mode='w+', order='F')
    holder = np.zeros([batch_size, 18], dtype='float64')
    counter = 0

    while 1:
        # copy the next batch of raw rows into the holder buffer
        holder[:] = data[counter * batch_size:(counter + 1) * batch_size]
        X, y = input_3D(holder, tempo, janela)
        length_X = len(X)
        length_y = len(y)
        print(length_X, length_y)
        y = y.reshape(-1, 1)
        X_output[0:length_X, :] = X
        y_output[0:length_y, :] = y
        counter += 1
        yield X_output[0:length_X, :].reshape(-1, 96, 10, 10, 17), y_output[0:length_y, :]
        # restart counter to yield data in the next epoch as well
        if counter >= number_of_batches:
            counter = 0

Nonetheless, the chunks are still being held in RAM, so after a few epochs the memory limit is exceeded.

Thanks

By following the approach described here:

https://stackoverflow.com/a/61472122/2962979

you may be able to address your issue by reconstructing the memmap object each time it is needed.
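Below is a minimal sketch of that idea (it does not reproduce your input_3D pipeline). It assumes the training data has already been written to a raw binary file of float64 rows with 18 columns (e.g. via ndarray.tofile()), and the split of each row into inputs and a target is purely for illustration. The memmap is re-created on every iteration and the reference is dropped right after the batch slice is copied, so the operating system is free to evict the mapped pages between batches.

import numpy as np

def memmap_generator(path, n_samples, batch_size):
    # path is assumed to point to a raw float64 file laid out as (n_samples, 18),
    # e.g. produced with array.tofile(path)
    number_of_batches = n_samples // batch_size
    counter = 0
    while True:
        # rebuild the memmap each iteration so the pages touched by the previous
        # batch can be released once the object is garbage-collected
        data = np.memmap(path, dtype='float64', mode='r',
                         shape=(n_samples, 18), order='F')
        # copy the batch into an ordinary in-memory array
        batch = np.array(data[counter * batch_size:(counter + 1) * batch_size])
        del data  # drop the reference to the mapped file

        X, y = batch[:, :-1], batch[:, -1:]  # hypothetical input/target split
        counter += 1
        if counter >= number_of_batches:
            counter = 0
        yield X, y

You would then pass this generator to model.fit (or fit_generator on older keras versions), together with steps_per_epoch = n_samples // batch_size.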

