Unsupervised multi-task domain adaptation

Shih Min Yang, Mei Chen Yeh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)


With abundant labeled data, deep convolutional neural networks have shown great success in various image recognition tasks. However, these models are often less powerful when applied to novel datasets due to a phenomenon known as domain shift. Unsupervised domain adaptation methods aim to address this problem, allowing deep models trained on the labeled source domain to be used on a different target domain (without labels). In this paper, we investigate whether the generalization ability of an unsupervised domain adaptation method can be improved through multi-task learning, with learned features required to be both domain invariant and discriminative for multiple different but relevant tasks. Experiments evaluating two fundamental recognition tasks, image recognition and segmentation, show that the generalization ability empowered by multi-task learning may not benefit recognition when the model is directly applied to the target domain, but the multi-task learning setting can boost the performance of state-of-the-art unsupervised domain adaptation methods by a non-negligible margin.
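As context for the abstract above: requiring features to be both domain invariant and discriminative for multiple tasks is commonly realized by summing per-task supervised losses with a domain-confusion term. A minimal sketch, assuming a weighted-sum formulation; the function name, weights, and loss values here are illustrative assumptions, not the paper's exact recipe:

```python
def total_loss(task_losses, domain_loss, task_weights=None, lam=0.1):
    """Weighted sum of per-task losses plus a domain-invariance term.

    task_losses: scalar losses, one per task (e.g. image recognition,
    segmentation); domain_loss: a domain-confusion term (e.g. from a
    domain discriminator); lam: trade-off weight for invariance.
    All names and weights are illustrative, not from the paper.
    """
    if task_weights is None:
        task_weights = [1.0] * len(task_losses)
    supervised = sum(w * l for w, l in zip(task_weights, task_losses))
    return supervised + lam * domain_loss

# Example: recognition loss 0.8, segmentation loss 1.2, domain loss 0.5
print(total_loss([0.8, 1.2], 0.5))  # 0.8 + 1.2 + 0.1 * 0.5 = 2.05
```

In such formulations each task head shares a common feature extractor, so minimizing the summed objective pushes the shared features toward being useful for every task while the domain term discourages source/target separability.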

Original language: English
Title of host publication: Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 7
ISBN (Electronic): 9781728188089
Publication status: Published - 2020
Event: 25th International Conference on Pattern Recognition, ICPR 2020 - Virtual, Milan, Italy
Duration: 2021 Jan 10 to 2021 Jan 15

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651


Conference: 25th International Conference on Pattern Recognition, ICPR 2020
City: Virtual, Milan

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

