Get warning: "You should probably TRAIN this model on a downstream task to be able to use it for predictions and inference." when loading a fine-tuned model
I get this message when loading a fine-tuned Bert model, with a feed-forward neural network on the last layer, from a checkpoint directory.
- This IS expected if you are initializing FlaubertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing FlaubertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of FlaubertForSequenceClassification were not initialized from the model checkpoint at /gpfswork/rech/kpf/umg16uw/results_hf/sm/checkpoint-10 and are newly initialized: ['sequence_summary.summary.weight', 'sequence_summary.summary.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
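As I understand it, the warning is produced because `from_pretrained` matches the parameter names stored in the checkpoint against the names the target class expects; any expected parameter missing from the checkpoint (here the classification head, `sequence_summary.*`) is randomly initialized and reported. A simplified, framework-free sketch of that name matching (the function and key names are illustrative, not the actual transformers internals):

```python
# Simplified illustration of the name matching that triggers the
# "newly initialized" warning; NOT the real transformers code.

def find_newly_initialized(checkpoint_keys, model_keys):
    """Return model parameter names that the checkpoint does not provide."""
    return sorted(set(model_keys) - set(checkpoint_keys))

# A checkpoint saved without a classification head lacks these keys...
checkpoint_keys = {
    "transformer.embeddings.weight",
    "transformer.layer.0.attention.weight",
}
# ...while a ForSequenceClassification model also expects the head:
model_keys = checkpoint_keys | {
    "sequence_summary.summary.weight",
    "sequence_summary.summary.bias",
}

missing = find_newly_initialized(checkpoint_keys, model_keys)
print(missing)
# These are exactly the parameters that get fresh random values,
# hence the warning to train before using the model for inference.
```

So the warning is harmless only if the listed keys were never trained in the first place; if the checkpoint was supposed to contain a trained head, their absence means the head's learned weights are not being loaded.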
The model has actually already been trained on a huge dataset, and I loaded it to perform inference on a new dataset.
modelForClass = '/g/checkpoint-10'
test_file = '/g/012.xml'

# load the fine-tuned checkpoint and run prediction on the new data
model = XXXForSequenceClassification.from_pretrained(modelForClass, num_labels=3)
test = preprare_data(PRE_TRAINED_MODEL_NAME, test_file)
pred = predict(test, model)
***** Running Prediction *****
Num examples = 5
Batch size = 8
0%| | 0/1 [00:00<?, ?it/s][[-0.0903191 0.18442413 -0.09337573]
[-0.08772105 0.17791435 -0.10178708]
[-0.0903393 0.18614864 -0.08101001]
[-0.08786416 0.1888753 -0.08145989]
[-0.06697702 0.1874733 -0.09423935]]
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 9.89it/s]
real 0m36.431s
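Note that the predicted logits above are nearly identical across all five examples, which is consistent with the head weights being randomly initialized: a random linear head maps similar pooled features to similar, uninformative logits. A minimal sketch of that effect (plain Python, hypothetical dimensions and init scale):

```python
import random

# Sketch: a randomly initialized linear head, like the one the warning
# reports, produces near-identical logits for similar feature vectors.
random.seed(0)
DIM, NUM_LABELS = 8, 3

# small random weights, standing in for the "newly initialized" head
W = [[random.gauss(0, 0.02) for _ in range(DIM)] for _ in range(NUM_LABELS)]

def head(features):
    """Apply the (untrained) linear head to a feature vector."""
    return [sum(w * f for w, f in zip(row, features)) for row in W]

# two different examples whose pooled representations are similar
feats_a = [1.0] * DIM
feats_b = [1.0 + 0.01 * i for i in range(DIM)]

logits_a, logits_b = head(feats_a), head(feats_b)
# logits_a and logits_b barely differ and carry no label signal,
# mirroring the near-constant rows in the prediction output above
```

If the logits for genuinely different inputs all look alike, that is a strong hint the classification head was not actually loaded from the checkpoint.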
Not sure if this helps, but I got the same error when loading an existing model using the Transformers library from HuggingFace. I fixed my error by initializing the proper framework (i.e. I was using TensorFlow when I should have been using PyTorch) and was then able to load the model. The model I was using had been trained with Roberta; however, I also swapped it for one based on a regular Bert model. I hope this helps or maybe points you in the right direction. If possible, could I see the complete code?