
PyTorch autograd backward() doesn't work (RuntimeError: output 0 of MmBackward is at version 1; expected version 0 instead)

I'm building a model that mixes a fine-tuned CLIP model with a frozen (zero-shot) CLIP model, and I compute a custom loss that combines a KL-divergence term (kl_loss) with cross-entropy (CEE).

        with torch.no_grad():
            zero_shot_image_features = zero_shot_model.encode_image(input_image)
            zero_shot_context_text_features = zero_shot_model.encode_text(context_label_text)

            zero_shot_image_features /= zero_shot_image_features.norm(dim=-1, keepdim=True)
            zero_shot_context_text_features /= zero_shot_context_text_features.norm(dim=-1, keepdim=True)
            zero_shot_output_context = (zero_shot_image_features @ zero_shot_context_text_features.T).softmax(dim=-1)
        
        
        fine_tunning_image_features = fine_tunning_model.encode_image(input_image)
        fine_tunning_context_text_features = fine_tunning_model.encode_text(context_label_text)
        
        fine_tunning_image_features /= fine_tunning_image_features.norm(dim=-1, keepdim=True)
        fine_tunning_context_text_features /= fine_tunning_context_text_features.norm(dim=-1, keepdim=True)
        fine_tunning_output_context = (fine_tunning_image_features @ fine_tunning_context_text_features.T).softmax(dim=-1)
        
        
        fine_tunning_label_text_features = fine_tunning_model.encode_text(label_text)
        fine_tunning_label_text_features /= fine_tunning_label_text_features.norm(dim=-1, keepdim=True)
        fine_tunning_output_label = (fine_tunning_image_features @ fine_tunning_label_text_features.T).softmax(dim=-1)

        optimizer_zeroshot.zero_grad()
        optimizer_finetunning.zero_grad()
        
        loss.backward(retain_graph=True)
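
The loss here is presumably produced by the custom_loss function defined below, roughly like this (target and alpha come from the surrounding training loop, which isn't shown):

    loss = custom_loss(zero_shot_output_context,
                       fine_tunning_output_context,
                       fine_tunning_output_label,
                       target, alpha)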

    def custom_loss(zero_shot_output_context, fine_output_context, fine_output_label, target, alpha):
        # Compute the cross-entropy loss
        ce_loss = F.cross_entropy(fine_output_label, target)

        # Compute the KL divergence between the zero-shot output and the fine-tuned output
        kl_loss = F.kl_div(zero_shot_output_context.log(), fine_output_context.log(), reduction='batchmean').requires_grad_(True)

        final_loss = ce_loss + alpha * kl_loss

        return final_loss

    RuntimeError                              Traceback (most recent call last)
    Cell In[18], line 81
         78 optimizer2.zero_grad()
         79 optimizer.zero_grad()
    ---> 81 loss.backward(retain_graph=True)
         83 if device == "cpu":
         84     optimizer.step()

    File ~/anaconda3/envs/sh_clip/lib/python3.8/site-packages/torch/tensor.py:221, in Tensor.backward(self, gradient, retain_graph, create_graph)
        213 if type(self) is not Tensor and has_torch_function(relevant_args):
        214     return handle_torch_function(
        215         Tensor.backward,
        216         relevant_args,
        (...)
        219         retain_graph=retain_graph,
        220         create_graph=create_graph)
    --> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)

    File ~/anaconda3/envs/sh_clip/lib/python3.8/site-packages/torch/autograd/__init__.py:130, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
        127 if retain_graph is None:
        128     retain_graph = create_graph
    --> 130 Variable._execution_engine.run_backward(
        131     tensors, grad_tensors, retain_graph, create_graph,
        132     allow_unreachable=True)

    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [6, 1024]], which is output 0 of MmBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
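
(The hint at the end of the traceback refers to PyTorch's anomaly-detection mode. A minimal sketch of how it could be used to locate the offending operation, assuming the loss is computed from custom_loss as above:)

    import torch

    # With anomaly detection on, the next backward() prints a second traceback
    # pointing at the forward op whose output was later modified in place.
    # It adds overhead, so enable it only while debugging.
    torch.autograd.set_detect_anomaly(True)

    loss = custom_loss(zero_shot_output_context,
                       fine_tunning_output_context,
                       fine_tunning_output_label,
                       target, alpha)
    loss.backward(retain_graph=True)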

But when I train the model, the backward() call fails with the error above. How can I fix it?


You use 'a /= b', which is an in-place operation; it will work if you change it to 'a = a / b'.
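
For example, applying that change to the normalization lines of the fine-tuned branch in your snippet (a sketch reusing the variable names from the question):

    # Out-of-place division creates new tensors instead of overwriting the
    # encoder outputs that autograd still needs for the backward pass.
    fine_tunning_image_features = fine_tunning_image_features / fine_tunning_image_features.norm(dim=-1, keepdim=True)
    fine_tunning_context_text_features = fine_tunning_context_text_features / fine_tunning_context_text_features.norm(dim=-1, keepdim=True)
    fine_tunning_label_text_features = fine_tunning_label_text_features / fine_tunning_label_text_features.norm(dim=-1, keepdim=True)

The '/=' inside the torch.no_grad() block can stay as it is, because no graph is recorded there; only the branch that requires gradients needs the change.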
