
Memory error for np.concatenate

When I run the following code in an IPython notebook:

_x = np.concatenate([_batches.next() for i in range(_batches.samples)])

I get this error message:

---------------------------------------------------------------
MemoryError                   Traceback (most recent call last)
<ipython-input-14-313ecf2ea184> in <module>()
----> 1 _x = np.concatenate([_batches.next() for i in range(_batches.samples)])

MemoryError:

The iterator has 9200 elements.

next(_batches) returns an np.array of shape (1, 400, 400, 3).

I have 30 GB of RAM and a 16 GB GPU.

I have a similar issue when I use predict_generator() in Keras. I run the following code:

bottleneck_features_train = bottleneck_model.predict_generator(batches, len(batches), verbose=1) 

With verbose=1 I can see the progress indicator run all the way to the end, but then I get the following error:

2300/2300 [==============================] - 177s 77ms/step
---------------------------------------------------------------
MemoryError                   Traceback (most recent call last)
<ipython-input-19-d0e463f64f5a> in <module>()
----> 1 bottleneck_features_train = bottleneck_model.predict_generator(batches, len(batches), verbose=1)

~/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in predict_generator(self, generator, steps, max_queue_size, workers, use_multiprocessing, verbose)
   2345                 return all_outs[0][0]
   2346             else:
-> 2347                 return np.concatenate(all_outs[0])
   2348         if steps_done == 1:
   2349             return [out for out in all_outs]

MemoryError: 

Could you please advise a solution for this memory issue? Thank you!

For the first error, the data is simply too big. Assuming a data type of int64 or float64 (8 bytes per element), the total data is 9200 * 400 * 400 * 3 * 8 bytes, i.e. about 35 GB, which already exceeds your 30 GB of RAM. On top of that, all of the data is first collected as a list of chunks and then copied into one big array by the concatenation, so the chunks and the result briefly exist side by side.
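As a quick sanity check of that arithmetic (the 8 bytes per element is an assumption based on NumPy's default float64 dtype):

import numpy as np

bytes_per_element = np.dtype(np.float64).itemsize  # 8 bytes for float64
total_bytes = 9200 * 400 * 400 * 3 * bytes_per_element
print(total_bytes / 1e9)  # ~35.3 GB, more than the available 30 GB of RAM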

You could preallocate the array instead of concatenating, and maybe it will work:

import numpy as np

x_ = np.empty((9200, 400, 400, 3))  # preallocate the full result
for i in range(9200):
    x_[i] = batches.next()  # each batch of shape (1, 400, 400, 3) broadcasts into x_[i]
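If that still does not fit, a smaller dtype roughly halves the footprint. This is only a sketch, under the assumption that your batches hold image data that can safely be stored as float32:

import numpy as np

# 4 bytes per element instead of 8: ~17.7 GB instead of ~35.3 GB,
# which fits in 30 GB of RAM.
x_ = np.empty((9200, 400, 400, 3), dtype=np.float32)
for i in range(9200):
    x_[i] = batches.next()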
