Domain Expansion: A Latent Space Construction Framework for Multi-Task Learning

Chi-Yao Huang, Khoa Vo, Aayush Verma, Duo Lu, Yezhou Yang
Accepted to ICLR 2026

Demo / Teaser


Abstract

Training a single network with multiple objectives often leads to conflicting gradients that degrade shared representations, forcing them into a compromised state that is suboptimal for any single task—a problem we term latent representation collapse. We introduce Domain Expansion, a framework that prevents these conflicts by restructuring the latent space itself. Our framework uses a novel orthogonal pooling mechanism to construct a latent space where each objective is assigned to a mutually orthogonal subspace.

We validate our approach across diverse benchmarks—including ShapeNet, MPIIGaze, and Rotated MNIST—on challenging multi-objective problems combining classification with pose and gaze estimation. Our experiments demonstrate that this structure not only prevents collapse but also yields an explicit, interpretable, and compositional latent space where concepts can be directly manipulated.

Method Overview

*Figure: method framework diagram.*

Method overview: (a) Latent representation collapse. In standard multi-task learning, competing objectives lead to latent representation collapse, where the solution spaces for different concepts (colored ellipses) overlap in only a small, compromised region. (b) Domain Expansion. In contrast, our method assigns each concept to an orthogonal basis vector in the latent space, preventing interference and creating a structured, interpretable representation where features for each concept are clearly separated.
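The idea of assigning each concept its own orthogonal subspace can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' implementation: it assumes a fixed split of the latent dimensions between two tasks and builds mutually orthogonal per-task bases as disjoint column blocks of one orthogonal matrix obtained via QR decomposition.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): give each task a fixed
# orthonormal basis. Bases for different tasks are mutually orthogonal
# because they are disjoint column blocks of one orthogonal matrix.

rng = np.random.default_rng(0)
latent_dim, task_dims = 8, [4, 4]  # assumed split: two tasks, 4 dims each

# QR of a random square matrix yields an orthogonal matrix Q.
Q, _ = np.linalg.qr(rng.standard_normal((latent_dim, latent_dim)))
bases = np.split(Q, np.cumsum(task_dims)[:-1], axis=1)  # one basis per task

z = rng.standard_normal(latent_dim)        # shared latent vector
parts = [B @ (B.T @ z) for B in bases]     # projection onto each subspace

# The per-task components do not interfere (zero dot product),
# and together they reconstruct the full latent vector.
assert abs(parts[0] @ parts[1]) < 1e-9
assert np.allclose(sum(parts), z)
```

Because the subspaces are orthogonal, a gradient update confined to one task's block cannot perturb another task's component, which is the intuition behind avoiding the collapse illustrated in panel (a).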

Citation

@inproceedings{huang2026domain,
  title     = {Domain Expansion: A Latent Space Construction Framework for Multi-Task Learning},
  author    = {Huang, Chi-Yao and Vo, Khoa and Verma, Aayush Atul and Lu, Duo and Yang, Yezhou},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026},
  url       = {https://arxiv.org/abs/2601.20069}
}