
How to manage GPU RAM in Google Colab?

When I run some DL models written in PyTorch, I get this error:

RuntimeError: CUDA out of memory. Tried to allocate 108.00 MiB (GPU 0; 14.73 GiB total capacity; 13.68 GiB already allocated; 11.88 MiB free; 13.78 GiB reserved in total by PyTorch).

It happens when I'm tracking down a bug and re-run a cell over and over. Are there any tricks to avoid it?

Here is an example of the code (I have split it into separate cells in Google Colab):

def train(...):
  ...
  assert False   # stops training deliberately so I can inspect the state
  ...

model = SegNet().to(device)
max_epochs = 20
optim = torch.optim.Adam(model.parameters(), lr=10e-5)
train(model, optim, bce_loss, max_epochs, data_tr, data_val)
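
For reference, something like this should show how much memory PyTorch itself is holding between re-runs (these are the same "allocated" / "reserved" numbers that appear in the error message; report_gpu_memory is just a throwaway helper name):

import torch

def report_gpu_memory(tag=""):
    # memory occupied by live tensors vs. memory reserved by PyTorch's caching allocator
    allocated = torch.cuda.memory_allocated() / 1024 ** 2
    reserved = torch.cuda.memory_reserved() / 1024 ** 2
    print(f"{tag}: allocated {allocated:.1f} MiB, reserved {reserved:.1f} MiB")

report_gpu_memory("after training cell")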

I found the del statement, so I can write

del model

after I'm done with model. But how can I manage memory that is still held after a manual interrupt or an assertion/error?
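
For a normal run I assume the intended pattern at the end of a cell is something like this (a minimal sketch; as far as I understand, empty_cache() can only release memory that nothing references any more):

import gc
import torch

del model, optim            # drop the Python references that keep the GPU tensors alive
gc.collect()                # collect anything left in reference cycles
torch.cuda.empty_cache()    # return PyTorch's cached blocks to the GPU driver

But after an assert or a CUDA error inside train(), the traceback can still hold references to those tensors, so this alone doesn't seem to free everything.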

Writing del X_batch right after y_pred = model(X_batch) helps a little. The .backward() call uses the biggest chunk of memory, but I can't do anything about that. I suppose Google Colab itself has nothing to do with the memory usage.
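
This is roughly where I put the del inside the training loop (a minimal sketch, not my exact code; the loop body, variable names and the running_loss bookkeeping are placeholders based on the cell above):

for X_batch, y_batch in data_tr:
    X_batch, y_batch = X_batch.to(device), y_batch.to(device)
    optim.zero_grad()
    y_pred = model(X_batch)            # forward pass
    del X_batch                        # drop the input reference as early as possible
    loss = bce_loss(y_pred, y_batch)
    del y_pred, y_batch                # not needed once the loss is computed
    loss.backward()                    # still the biggest allocation: gradients + saved activations
    optim.step()
    running_loss = loss.item()         # .item() returns a plain float, so the graph isn't kept alive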
