
How to Use GPU RAM in Google Colab efficiently?

I am designing a multi-label image classifier. For this I need to load around 7,867 training images. While loading the images, the RAM usage increases from 0.92 GB to 12.5 GB.

After loading, when I convert the images into a NumPy array, the RAM usage reaches the total available size, i.e. 25.54 GB, and the code stops executing with the error "your session crashed".

Sample code which I am using:

import numpy as np
from tqdm import tqdm
from keras.preprocessing import image

train_images = []
for i in tqdm(range(train.shape[0])):
    # Load each image, resize it to 400x400x3 and scale pixel values to [0, 1]
    img = image.load_img(
        '/content/Multi_Label_dataset/Images/' + train['Id'][i] + '.jpg',
        target_size=(400, 400, 3)
    )
    img = image.img_to_array(img)
    img = img / 255
    train_images.append(img)

Up to this point, the RAM usage was 12.52 GB.

X = np.array(train_images)

While executing this line, the RAM usage indicator turns red and the "session crashed" message pops up.

How can I handle this?

Your dataset is too large to be loaded into RAM all at once. This is a common situation when working with image datasets. Along with the dataset, the RAM also needs to hold the model, other variables, and additional working space for processing.
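As a rough check (a back-of-the-envelope sketch, assuming float32 arrays as produced by img_to_array), the list of images alone needs on the order of 15 GB, and np.array() then allocates a second contiguous copy, so the peak usage roughly doubles and exceeds the 25.54 GB available:

# Rough memory estimate, assuming 7867 images of shape (400, 400, 3) stored as float32
num_images = 7867
bytes_per_image = 400 * 400 * 3 * 4            # ~1.92 MB per image
list_gb = num_images * bytes_per_image / 1e9   # ~15.1 GB held by the Python list
peak_gb = 2 * list_gb                          # np.array() copies the data: ~30.2 GB peak
print(round(list_gb, 1), round(peak_gb, 1))    # 15.1 30.2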

To avoid loading everything at once, you can make use of Keras's ImageDataGenerator and its flow_from_directory() method. Have a look at the Keras documentation.

The ImageDataGenerator takes care of the image pre-processing, such as reshaping and normalizing. flow_from_directory() is what solves the memory issue: it dynamically loads a batch of images from the specified directory and then passes them to the model after applying the pre-processing steps.
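Below is a minimal sketch of that approach. The directory layout, batch size and the model variable are assumptions for illustration, not taken from the question; flow_from_directory() expects one sub-folder per class, so for a multi-label setup driven by a CSV like yours, the related flow_from_dataframe() method may be the closer fit.

from keras.preprocessing.image import ImageDataGenerator

# rescale replaces the manual img / 255 normalization step
train_datagen = ImageDataGenerator(rescale=1./255)

# Assumed layout: /content/Multi_Label_dataset/train/<class_name>/<image>.jpg
train_generator = train_datagen.flow_from_directory(
    '/content/Multi_Label_dataset/train',   # hypothetical path
    target_size=(400, 400),
    batch_size=32,
    class_mode='categorical'
)

# Only one batch of images is held in RAM at a time;
# 'model' is assumed to be your compiled Keras model
model.fit(train_generator, epochs=10)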
