Wav2Vec PyTorch: element 0 of tensors does not require grad and does not have a grad_fn

I am retraining a wav2vec model from Hugging Face for a classification problem. I have 5 classes, and the input is a list of tensors of shape [1, 400]. Here is how I am loading the model:

from transformers import AutoConfig, Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC

num_labels = 5
model_name = "Zaid/wav2vec2-large-xlsr-53-arabic-egyptian"
model_config = AutoConfig.from_pretrained(model_name, num_labels=num_labels)  # needed for the visualizations
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name, config=model_config)

Here are the updated model settings:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import AdamW

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Freeze the pre-trained parameters
for param in model.parameters():
    param.requires_grad = False

criterion = nn.MSELoss().to(device)
optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-6)

# Add three new linear layers at the end of the network
model.classifier = nn.Sequential(
    nn.Linear(768, 256),
    nn.Dropout(0.25),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.Dropout(0.25),
    nn.ReLU(),
    nn.Linear(64, 2),
    nn.Dropout(0.25),
    nn.Softmax(dim=1)
)

Then the training loop:

print_every = 300

total_loss = 0
all_losses = []
model.train()
for epoch in range(2):
    print("Epoch number: ", epoch)
    for row in range(16918):
        Input = torch.tensor(trn_ivectors[row]).double()
        label = torch.tensor(trn_labels[row]).long().to(device)
        label = torch.unsqueeze(label, 0).to(device)
        # print("Label", label.shape)
        Input = torch.unsqueeze(Input, 1).to(device)
        # print(Input.shape)
        optimizer.zero_grad()

        # Input.requires_grad = True
        Input = F.softmax(Input[0], dim=-1)

        # Convert the class index to a one-hot target for MSELoss
        if label == 0:
            label = torch.tensor([1.0, 0.0]).float().to(device)
        elif label == 1:
            label = torch.tensor([0.0, 1.0]).float().to(device)

        # print(overall_output, label)

        loss = criterion(Input, label)
        total_loss += loss.item()

        loss.backward()
        optimizer.step()

        if row % print_every == 0 and row > 0:
            average_loss = total_loss / print_every
            print("{}/{}. Average loss: {}".format(row, len(trn_ivectors), average_loss))
            all_losses.append(average_loss)
            total_loss = 0

torch.save(model.state_dict(), "model_after_train.pt")

Unfortunately, when I try to train, the program gives me the following error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I would appreciate it if you could tell me how to fix this error. I have searched a lot for a way to fix it, but haven't been able to.

Thanks
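
For context, PyTorch raises this error whenever backward() is called on a result that has no grad_fn, i.e. nothing upstream of it requires grad. A minimal standalone sketch (toy tensors, not from the code above) that reproduces the same RuntimeError:

import torch

x = torch.ones(3)        # requires_grad defaults to False
loss = (x * 2.0).sum()   # built only from non-grad tensors, so it has no grad_fn
loss.backward()          # RuntimeError: element 0 of tensors does not require grad ...

That is exactly the situation in your loop: the loss is computed directly from Input, which does not require grad, and every model parameter was frozen with requires_grad = False, so autograd has nothing to differentiate.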

Please try adding

Input.requires_grad = True

to your input tensor; that is, uncomment the line you already have right after optimizer.zero_grad().
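
A minimal sketch of how that fits into your loop (hypothetical placement; everything else stays as in your post):

optimizer.zero_grad()

Input.requires_grad = True           # Input is still a leaf tensor here, so this is allowed
Input = F.softmax(Input[0], dim=-1)  # the softmax output now carries a grad_fn

# ... one-hot label conversion as before ...

loss = criterion(Input, label)
loss.backward()                      # should no longer raise the RuntimeError

This connects the loss to a tensor that requires grad, so backward() has a graph to walk. Note that since every model parameter is frozen, optimizer.step() will still not update any of the pretrained weights.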
