Why am I running out of RAM in my runtime when running my code in Colab?
import os
import numpy as np
from PIL import Image, ImageFile
from tqdm import tqdm

IMAGE_SIZE = 128
IMAGE_CHANNELS = 3

ImageFile.LOAD_TRUNCATED_IMAGES = True

training_data = []
bad = 0
for filename in tqdm(os.listdir(images_path)):
    path = os.path.join(images_path, filename)
    image = Image.open(path).resize((IMAGE_SIZE, IMAGE_SIZE), Image.NEAREST)
    if np.asarray(image).size != 49152:  # 128 * 128 * 3
        bad = bad + 1
        print(bad)
        print(path)
    picarray = np.asarray(image)
    #picarray = (picarray>>16).astype(np.int16)
    training_data.append(picarray)

training_data = np.reshape(training_data, (-1, IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS))
training_data = training_data / 127.5 - 1
print('saving file...')
np.save('/content/drive/MyDrive/Art/cubism_data.npy', training_data)
When I run the code above, my session uses up all of its RAM and I am not sure why. Any help would be appreciated.
You are loading all of your images into RAM at once. This is most likely the cause of the error.
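A quick back-of-the-envelope calculation shows why this runs out of memory. Each 128x128 RGB image is small as uint8, but dividing by 127.5 promotes the whole stack to float64 (8 bytes per value), an 8x blow-up. The image count below is a hypothetical dataset size, just for illustration:

```python
IMAGE_SIZE, IMAGE_CHANNELS = 128, 3

# One image as raw uint8 pixels (1 byte per value).
bytes_uint8 = IMAGE_SIZE * IMAGE_SIZE * IMAGE_CHANNELS

# After `training_data / 127.5 - 1`, NumPy promotes the array to
# float64: 8 bytes per value.
bytes_float64 = bytes_uint8 * 8

n_images = 10_000  # hypothetical dataset size
total_gib = n_images * bytes_float64 / 2**30
print(f"{n_images} images as float64: {total_gib:.1f} GiB")
```

Note that `np.reshape` on the list already materializes a second full copy before the division, so the peak usage is even higher than this estimate.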
I don't entirely understand the end goal of this script; I assume you want to build a training set for a learning process. If so, you can use techniques that load only one batch of images at a time. The tensorflow Dataset object (tf.data) will take care of that work for you.
However, if you must open all of the images, then you should consider compressing or resizing them.
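Another option that keeps your original single-file output is to pre-allocate the .npy file on disk and stream images into it one at a time, so the full dataset never lives in RAM. A minimal sketch, using a random uint8 array as a stand-in for each decoded image (in the real script you would use your `Image.open(...).resize(...)` result instead), and float32 to halve the footprint versus float64:

```python
import os
import tempfile
import numpy as np

IMAGE_SIZE, IMAGE_CHANNELS = 128, 3
n_images = 16  # stand-in for len(os.listdir(images_path))
out_path = os.path.join(tempfile.mkdtemp(), "cubism_data.npy")

# Pre-allocate the output .npy file on disk; only one image's worth
# of data is held in RAM at a time.
out = np.lib.format.open_memmap(
    out_path, mode="w+",
    dtype=np.float32,
    shape=(n_images, IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS),
)

for i in range(n_images):
    # In the real script this would be:
    #   image = Image.open(path).resize((IMAGE_SIZE, IMAGE_SIZE), Image.NEAREST)
    #   picarray = np.asarray(image)
    # Here a random uint8 array stands in for the decoded image.
    picarray = np.random.randint(
        0, 256, (IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS), dtype=np.uint8
    )
    # Normalize to [-1, 1] and write straight to the memmapped file.
    out[i] = picarray.astype(np.float32) / 127.5 - 1

out.flush()
print(out_path, out.dtype, out.shape)
```

The resulting file can be read back later with a plain `np.load`, or with `np.load(..., mmap_mode="r")` if it is too large to load whole.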