
Access the output of several layers of pretrained DistilBERT model

I am trying to access the output embeddings from several different layers of the pretrained DistilBERT model ("distilbert-base-uncased").

bert_output = model(input_ids, attention_mask=attention_mask)

bert_output appears to contain only the last layer's embeddings for the input tokens.

If you want to get the output of all the hidden layers, you need to add the output_hidden_states=True kwarg to your config.

Your code will look something like

from transformers import DistilBertModel, DistilBertConfig

config = DistilBertConfig.from_pretrained('distilbert-base-uncased', output_hidden_states=True)
model = DistilBertModel.from_pretrained('distilbert-base-uncased', config=config)

The per-layer hidden states will then be available as bert_output.hidden_states (or bert_output[1] when the model returns a plain tuple; DistilBERT has no pooler output, so unlike BERT the hidden states are not at index 2).
