Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning

1 Bocconi University, Bocconi Institute for Data Science and Analytics, Milan, Italy
2 University of Verona, Dept. of Engineering for Innovation Medicine, Verona, Italy
3 HUMATICS - SYS-DAT Group, Verona, Italy
4 University of Milano-Bicocca, Dept. of Informatics, Systems and Communication, Milan, Italy
🎉 Accepted @ ICIAP 2025 🎉
Detaux teaser
Detaux consists of two steps. 1) First, we use weakly supervised disentanglement to isolate the structural features specific to the principal task in one subspace (red rectangle at the top of the image). 2) Next, we identify the most disentangled subspace among those unrelated to the principal task and, through a clustering module, obtain new labels (blue rectangle in the bottom-left part of the image). These labels define a new classification task that can be combined with the principal task in any MTL model (bottom-right part of the image).

Abstract

Auxiliary tasks facilitate learning in situations where data is scarce or the principal task of focus is extremely complex. This idea is primarily inspired by the improved generalization capability induced by solving multiple tasks simultaneously, which leads to a more robust shared representation. Nevertheless, finding optimal auxiliary tasks is a crucial problem that often requires hand-crafted solutions or expensive meta-learning approaches. In this paper, we propose a novel framework, dubbed Detaux, whereby a weakly supervised disentanglement procedure is used to discover a new, unrelated auxiliary classification task, which allows us to go from a Single-Task Learning (STL) to a Multi-Task Learning (MTL) problem. The disentanglement procedure works at the representation level, isolating the variation related to the principal task into a dedicated subspace and additionally producing an arbitrary number of orthogonal subspaces, each encouraging high separability among projections. We generate the auxiliary classification task through a clustering procedure on the most disentangled subspace, obtaining a discrete set of labels. Subsequently, the original data, the labels associated with the principal task, and the newly discovered ones can be fed into any MTL framework. Experimental validation on both synthetic and real data, along with various ablation studies, demonstrates promising results, revealing the potential of what has been, so far, an unexplored connection between learning disentangled representations and MTL.
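To make the second stage of the pipeline concrete, here is a minimal, self-contained sketch of the "cluster the most disentangled subspace to get auxiliary labels" idea. This is not the actual Detaux implementation: the toy latent codes, the fixed 4-dimensional subspaces, the variance-explained clusterability score, and all variable names (`Z`, `best_idx`, `aux_labels`, etc.) are hypothetical stand-ins for the representations produced by the disentanglement model.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means; returns one cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def clusterability(X, k):
    """Share of variance explained by a k-means partition (1 = perfect).
    A simple proxy for how much discrete structure a subspace contains."""
    labels = kmeans(X, k)
    centers = np.stack([X[labels == j].mean(axis=0) if (labels == j).any()
                        else X.mean(axis=0) for j in range(k)])
    within = ((X - centers[labels]) ** 2).sum()
    total = ((X - X.mean(axis=0)) ** 2).sum()
    return 1.0 - within / total, labels

# Toy latent codes standing in for a disentangled encoder's output:
# 3 subspaces of 4 dims each. Subspace 0 is assumed to carry the
# principal-task factor, subspace 1 a hidden 3-way factor, subspace 2 noise.
rng = np.random.default_rng(0)
n, k_aux = 300, 3
hidden = rng.integers(0, 3, size=n)            # unknown generative factor
Z = rng.normal(scale=0.3, size=(n, 12))
Z[:, 4:8] += rng.normal(scale=3.0, size=(3, 4))[hidden]

# Score every subspace except the principal one (index 0), then cluster
# the winner to obtain the auxiliary classification labels.
candidates = [1, 2]
scores = [clusterability(Z[:, 4 * i:4 * (i + 1)], k_aux)[0] for i in candidates]
best_idx = candidates[int(np.argmax(scores))]
_, aux_labels = clusterability(Z[:, 4 * best_idx:4 * (best_idx + 1)], k_aux)
print(best_idx, np.bincount(aux_labels))
```

Under these assumptions, the structured subspace (index 1) scores far higher than the noise subspace, and `aux_labels` can be paired with the principal-task labels to train any off-the-shelf MTL model.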

BibTeX

@Article{skenderi2023disentangled,
  title   = {{Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning}},
  author  = {Skenderi, Geri and Capogrosso, Luigi and Toaiari, Andrea and Denitto, Matteo and Fummi, Franco and Melzi, Simone and Cristani, Marco},
  journal = {arXiv preprint arXiv:2310.09278},
  year    = {2023}
}