Multi-task Learning

Created: 2022-03-24 16:59

By sharing representations between related tasks, we can enable our model to generalize better on our original task.
Sharing hidden representations also helps avoid overfitting by acting as a regularizer, can aid interpretation of the output (by looking at the auxiliary tasks), and the auxiliary tasks provide an implicit form of data augmentation that alleviates the #sparsity problem.
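As a minimal sketch of the most common setup, hard parameter sharing: one shared representation layer feeds two task-specific heads, and a single combined loss (main task plus a weighted auxiliary task) updates the shared weights. Everything here is illustrative (toy data, hand-written gradients, the auxiliary weight `lam`), not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 4 features; both tasks share the same inputs.
X = rng.normal(size=(8, 4))
y_main = rng.normal(size=(8, 1))  # main regression target
y_aux = rng.normal(size=(8, 1))   # auxiliary regression target

# Hard parameter sharing: one shared layer, two task-specific heads.
W_shared = rng.normal(size=(4, 3)) * 0.1
W_main = rng.normal(size=(3, 1)) * 0.1
W_aux = rng.normal(size=(3, 1)) * 0.1

lam, lr = 0.5, 0.1  # auxiliary-task weight and learning rate (arbitrary choices)

init_loss = float(np.mean((np.tanh(X @ W_shared) @ W_main - y_main) ** 2))

for _ in range(200):
    H = np.tanh(X @ W_shared)          # shared representation
    err_main = H @ W_main - y_main
    err_aux = H @ W_aux - y_aux
    # Combined objective: L = MSE_main + lam * MSE_aux.
    # The shared layer receives gradients from BOTH tasks.
    dH = (err_main @ W_main.T + lam * err_aux @ W_aux.T) * (1 - H ** 2)
    W_main -= lr * H.T @ err_main / len(X)
    W_aux -= lr * lam * H.T @ err_aux / len(X)
    W_shared -= lr * X.T @ dH / len(X)

loss_main = float(np.mean((np.tanh(X @ W_shared) @ W_main - y_main) ** 2))
print(init_loss, loss_main)  # main-task loss before and after training
```

The alternative, soft parameter sharing, keeps separate models per task and instead penalizes the distance between their parameters.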

Several approaches are presented in the survey referenced below.

References

  1. https://ruder.io/multi-task/

Tags

#multi-task