AttributeError: 'str' object has no attribute 'shape' while encoding tensor using BertModel with PyTorch (Hugging Face)
Below is the code
bert_model = BertModel.from_pretrained(r'downloads\bert-pretrained-model')
input_ids
Output is:
tensor([[ 101, 156, 13329, ..., 0, 0, 0],
[ 101, 156, 13329, ..., 0, 0, 0],
[ 101, 1302, 1251, ..., 0, 0, 0],
...,
[ 101, 25456, 1200, ..., 0, 0, 0],
[ 101, 143, 9664, ..., 0, 0, 0],
[ 101, 2586, 7340, ..., 0, 0, 0]])
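For context, the `encoding` dict used in the next snippet is typically produced by the tokenizer. A minimal sketch of that step (the sample text, padding length, and the tiny ad-hoc vocabulary are assumptions to keep the example self-contained, not from the original notebook, which would use `BertTokenizer.from_pretrained(...)`):

```python
import tempfile

import torch
from transformers import BertTokenizer

# Build a tiny tokenizer from an ad-hoc vocab file so this sketch runs offline;
# real code would load the pretrained vocabulary instead.
vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "hello", "world"]
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(vocab))
    vocab_file = f.name

tokenizer = BertTokenizer(vocab_file)
encoding = tokenizer(
    "hello world",
    padding="max_length",
    max_length=8,
    truncation=True,
    return_tensors="pt",  # return PyTorch tensors, as in the question
)
print(encoding["input_ids"])       # token ids padded with 0s, shape [1, 8]
print(encoding["attention_mask"])  # 1 for real tokens, 0 for padding
```

The resulting `encoding["input_ids"]` and `encoding["attention_mask"]` are the tensors passed into `bert_model(...)` below.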
Followed by the code below
last_hidden_state, pooled_output = bert_model(
input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask']
)
Followed by the code below
last_hidden_state.shape
Output is
AttributeError Traceback (most recent call last)
<ipython-input-70-9628339f425d> in <module>
----> 1 last_hidden_state.shape
AttributeError: 'str' object has no attribute 'shape'
Complete Code link is 'https://colab.research.google.com/drive/1FY4WtqCi2CQ9RjHj4slZwtdMhwaWv2-2?usp=sharing'
The issue is that the return type has changed since version 3.x of transformers: by default the model now returns a ModelOutput object instead of a tuple of tensors, so unpacking it assigns its string keys to your variables. We have to explicitly ask for a tuple of tensors.
So, we can pass an additional kwarg return_dict=False when we call bert_model() to get an actual tensor that corresponds to last_hidden_state.
last_hidden_state, pooled_output = bert_model(
input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask'],
    return_dict=False  # this is needed to get a tuple of tensors as the result
)
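With `return_dict=False` the model returns a plain tuple, so the unpacking above works. A self-contained sketch using a small randomly initialized BERT (the config values are arbitrary, chosen only to keep the example light; real use loads pretrained weights with `BertModel.from_pretrained(...)`):

```python
import torch
from transformers import BertConfig, BertModel

# Tiny random-weight BERT as a stand-in for the pretrained model
config = BertConfig(
    vocab_size=100, hidden_size=32, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=64,
)
model = BertModel(config)
model.eval()

input_ids = torch.randint(0, 100, (1, 17))
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    last_hidden_state, pooled_output = model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        return_dict=False,  # return a tuple instead of a ModelOutput
    )

print(last_hidden_state.shape)  # torch.Size([1, 17, 32])
print(pooled_output.shape)      # torch.Size([1, 32])
```

Because the call now yields real tensors, `last_hidden_state.shape` no longer raises the AttributeError from the question.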
If you do not like the previous approach, you can instead keep the default ModelOutput and access the tensors by key:
In [13]: bm = bert_model(
...: encoding_sample['input_ids'],
...: encoding_sample['attention_mask']
...: )
In [14]: bm.keys()
Out[14]: odict_keys(['last_hidden_state', 'pooler_output'])
# accessing last_hidden_state
In [15]: bm['last_hidden_state']
In [16]: bm['last_hidden_state'].shape
Out[16]: torch.Size([1, 17, 768])
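The ModelOutput returned by default also supports attribute access and positional indexing, all of which yield the same tensor. A sketch, again with a small randomly initialized BERT standing in for the pretrained one (the config values are arbitrary assumptions):

```python
import torch
from transformers import BertConfig, BertModel

# Tiny random-weight BERT as a stand-in for the pretrained model
model = BertModel(BertConfig(
    vocab_size=100, hidden_size=32, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=64,
))
model.eval()

input_ids = torch.randint(0, 100, (1, 17))

with torch.no_grad():
    out = model(input_ids)  # default return_dict=True -> ModelOutput

# All three access styles return the same underlying tensor:
t1 = out["last_hidden_state"]  # dict-style key access, as in the answer
t2 = out.last_hidden_state     # attribute access
t3 = out[0]                    # positional indexing
assert t1 is t2 is t3
print(t1.shape)  # torch.Size([1, 17, 32])
```

Attribute access (`out.last_hidden_state`) is usually the most readable choice, and it fails loudly with an informative error if you mistype the key.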