
Train a model on one task and test it on another task?

I have a data frame consisting of 3000 samples, n features, and two target columns, as follows:

mydata:
       id,   f1, f2, ..., fn, target1, target2
       01,   23, 32, ..., 44,   0    ,  1
       02,   10, 52, ..., 11,   1    ,  2
       03,   66, 15, ..., 65,   1    ,  0
                     ...
       2000, 76, 32, ..., 17,   0    ,  1

Here, I have a multi-task learning problem (I am quite new to this domain), and I want to train a model/network on target1 and test it on target2 .

If we consider target1 and target2 as tasks, they might be related, but we do not know how much. So, I want to see how well a model trained on task 1 ( target1 ) can predict task 2 ( target2 ).

It seems this is not possible, since target1 is binary (0 and 1) while target2 has more than two values (0, 1, and 2). Is there any way to handle this issue?

This is not called multi-task learning but transfer learning. It would be multi-task learning if you had trained your model to predict both target1 and target2 at the same time.

Yes, there are ways to handle this issue. The final layer of the model is just the classifier head that computes the final label from the previous layer's output. You can treat the output of that previous layer as an embedding of the data point and use this representation to train/fine-tune another model. You have to plug in a new head, though, since you now have three classes.

So, in pseudocode, you need something like:

model = remove_last_layer(model)
model.add(<your new classification head outputting 3 classes>)
model.train()
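A minimal runnable sketch of this idea using scikit-learn (an assumption on my part; the same pattern applies to any deep-learning framework). An MLP is trained on the binary target1, its hidden layer is reused as a frozen embedding (i.e. the original classifier head is discarded), and a new three-class logistic-regression head is trained on target2. The data here is synthetic, just to make the example self-contained:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the real data frame: n=10 features, two targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
target1 = (X[:, 0] > 0).astype(int)        # binary task 1
target2 = rng.integers(0, 3, size=2000)    # 3-class task 2

# 1) Train the "body" on task 1 (target1, binary).
body = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
body.fit(X, target1)

# 2) Drop the final classifier head: use the hidden-layer activations
#    as embeddings (manual forward pass, relu is MLPClassifier's default).
def embed(X):
    return np.maximum(X @ body.coefs_[0] + body.intercepts_[0], 0)

# 3) Plug in a new head with 3 output classes and train it on target2.
head = LogisticRegression(max_iter=500)
head.fit(embed(X), target2)

preds = head.predict(embed(X))
```

Here the body is kept frozen and only the new head is trained; in a deep-learning framework you could instead fine-tune the whole network after swapping the head.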

You can then compare this approach to the baseline, where you train from scratch on target2 , to analyze how much transfers between the two tasks.
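For the baseline, a sketch under the same synthetic-data assumption as above: train a fresh model directly on target2 and measure held-out accuracy, so it can be compared against the transfer-learning variant on the same test split:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Same synthetic stand-in as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
target2 = rng.integers(0, 3, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, target2, random_state=0)

# Baseline: train on target2 from scratch, no transfer from target1.
scratch = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
scratch.fit(X_tr, y_tr)
baseline_acc = accuracy_score(y_te, scratch.predict(X_te))
print(f"from-scratch accuracy on target2: {baseline_acc:.3f}")
```

If the transfer-learned head beats (or matches) this baseline, that is evidence the two tasks share useful structure.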
