Train a model on one task and test it on another task?
I have a data frame consisting of 3000 samples, n features, and two target columns, as follows:
mydata:
id, f1, f2, ..., fn, target1, target2
01, 23, 32, ..., 44, 0 , 1
02, 10, 52, ..., 11, 1 , 2
03, 66, 15, ..., 65, 1 , 0
...
2000, 76, 32, ..., 17, 0 , 1
Here, I have a multi-task learning problem (I am quite new to this domain), and I want to train a model/network with `target1` and test it with `target2`.
If we consider `target1` and `target2` as tasks, they might be related, but we do not know how closely. So, I want to see how well a model trained on task 1 (`target1`) can predict task 2 (`target2`).
It seems this is not possible, since `target1` is a binary class (0 and 1) but `target2` has more than two values (0, 1, and 2). Is there any way to handle this issue?
This is not called multi-task learning but transfer learning. It would be multi-task learning if you had trained your model to predict both `target1` and `target2`.
Yes, there are ways to handle this issue. The final layer of the model is just the classifier head that computes the final label from the previous layer. You can consider the output of the previous layer as an embedding of the data point and use this representation to train/fine-tune another model. You have to plug in another head, though, since you now have three classes.
So in pseudo-code, you need something like:
model = remove_last_layer(model)
model.add(<your new classification head outputting 3 classes>)
model.train()
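As a concrete sketch of this head swap using scikit-learn (the synthetic data, the hidden-layer size, and the `embed` helper are all illustrative assumptions, not details from the question):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the data frame described above; the two targets
# are deliberately constructed to be related tasks.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))                          # n = 10 features
score = X[:, 0] + X[:, 1] + X[:, 2]
target1 = (X[:, 0] + X[:, 1] > 0).astype(int)            # binary task
target2 = np.digitize(score, [-0.5, 0.5])                # 3-class task (0, 1, 2)

# 1. Train the "source" model on target1.
source = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
source.fit(X, target1)

# 2. "Remove the last layer": reuse the hidden-layer activations as embeddings.
def embed(model, X):
    # Forward pass through the single hidden layer (relu is the default).
    return np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])

Z = embed(source, X)

# 3. Plug in a new 3-class head and train it on target2.
head = LogisticRegression(max_iter=1000).fit(Z, target2)
print("transfer accuracy:", head.score(Z, target2))
```

In a deep-learning framework you would instead freeze (or partially unfreeze) the body of the network and attach a new 3-unit output layer; the idea is the same.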
You can then compare this approach to the baseline, where you train from scratch on `target2`, to analyze the transfer learning between these two tasks.
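A minimal sketch of that baseline, on the same kind of synthetic stand-in data as above (all names and sizes are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Same illustrative stand-in data: a 3-class target derived from the features.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
target2 = np.digitize(X[:, 0] + X[:, 1] + X[:, 2], [-0.5, 0.5])

# Baseline: train a fresh model directly on target2, with no transfer.
baseline = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
scores = cross_val_score(baseline, X, target2, cv=3)
print("baseline accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Comparing this cross-validated score against the transferred model's score on `target2` tells you how much task 1 actually helps task 2.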