
Separately save the model weight in PyTorch

I am using PyTorch to train a deep learning model. I wonder if it is possible for me to save the model weights separately. For example:

import torch
import torch.nn as nn
import transformers

class my_model(nn.Module):
    def __init__(self):
        super(my_model, self).__init__()
        # Pretrained BERT encoder as the base model
        self.bert = transformers.AutoModel.from_pretrained(BERT_PATH)
        # Linear head mapping the 768-dim pooled output to a single score
        self.out = nn.Linear(768, 1)

    def forward(self, ids, mask, token_type):
        # [1] selects the pooled output of the BERT encoder
        x = self.bert(ids, mask, token_type)[1]
        x = self.out(x)
        return x

I have the BERT model as the base model and an additional linear layer on top. After I train this model, can I save the weights of the BERT model and of this linear layer separately?

You can save the state_dict of each submodule into a single checkpoint dictionary:

model = my_model()
# train ...
# Store each submodule's state_dict under its own key
torch.save({'bert': model.bert.state_dict(), 'out': model.out.state_dict()}, 'checkpoint.pth')
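
To restore the weights later, you can load the checkpoint and feed each entry to the matching submodule. A minimal loading sketch, assuming the my_model class and the 'checkpoint.pth' file from above:

model = my_model()
checkpoint = torch.load('checkpoint.pth')
# Load each submodule's weights from its key in the checkpoint dict
model.bert.load_state_dict(checkpoint['bert'])
model.out.load_state_dict(checkpoint['out'])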

Alternatively to the previous answer, you can create two separate nn.Module classes: one for the BERT model and another for the linear layer:

class bert_model(nn.Module):
    def __init__(self):
        super(bert_model, self).__init__()
        self.bert = transformers.AutoModel.from_pretrained(BERT_PATH)

    def forward(self, ids, mask, token_type):
        # Return the pooled output of the BERT encoder
        x = self.bert(ids, mask, token_type)[1]
        return x

class linear_layer(nn.Module):
    def __init__(self):
        super(linear_layer, self).__init__()
        self.out = nn.Linear(768, 1)

    def forward(self, x):
        x = self.out(x)
        return x

Then you can save the two parts of the model separately:

bert = bert_model()
head = linear_layer()
# train ...
# Use two distinct paths so the files do not overwrite each other
torch.save(bert.state_dict(), BERT_SAVE_PATH)
torch.save(head.state_dict(), LINEAR_SAVE_PATH)
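
Loading them back mirrors the saving. A minimal sketch, assuming the two classes above, the same save paths, and that ids, mask and token_type are already prepared inputs:

bert = bert_model()
head = linear_layer()
bert.load_state_dict(torch.load(BERT_SAVE_PATH))
head.load_state_dict(torch.load(LINEAR_SAVE_PATH))
# Chain the two parts at inference time
x = bert(ids, mask, token_type)
prediction = head(x)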
