

In PyTorch, I want to save the output of every epoch for later calculation. But it leads to an out-of-memory error after several epochs. The code looks like this:

    L = []
    optimizer.zero_grad()
    for i, (input, target) in enumerate(train_loader):
        output = model(input)
        L.append(output)
    # *** update my model to minimize a loss function. List L will be used here.

I know the reason is that PyTorch keeps the computation graph of every output from every epoch. But the loss function can only be calculated after obtaining all of the prediction results.

Is there a way I can train my model?

Are you training on a GPU?

If so, you could move each output to main memory, like:

    L.append(output.detach().cpu())
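A minimal sketch of that pattern, assuming a hypothetical linear model and random batches in place of the asker's `model` and `train_loader`. `detach()` cuts the stored copy out of the autograd graph, so the per-batch graph can be freed, and `.cpu()` moves it off the GPU:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the asker's model and data loader.
model = nn.Linear(4, 2)
L = []
for _ in range(3):
    input = torch.randn(8, 4)
    output = model(input)
    # Store a copy with no autograd history, in main (CPU) memory,
    # so the computation graph for this batch is not retained.
    L.append(output.detach().cpu())
```

One caveat: detached tensors carry no gradient history, so a loss computed from `L` cannot backpropagate into the model. This suits logging or post-hoc metrics; if the loss over all outputs must drive the update, the graphs genuinely need to be kept (or the loss restructured to be accumulated per batch).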
