
Multiview Representation Learning for a Union of Subspaces

Canonical correlation analysis (CCA) is a popular technique for learning representations that are maximally correlated across multiple views of the data. In this paper, we extend the CCA-based framework to learn a multiview mixture model. We show that the proposed model, combined with a set of simple heuristics, yields improvements over standard CCA, as measured by performance on downstream tasks. Our experimental results show that our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
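For readers unfamiliar with the baseline being generalized, the sketch below illustrates the standard linear CCA objective in NumPy: whiten each view, then take the SVD of the whitened cross-covariance, whose singular values are the canonical correlations. This is only a reference implementation of classical CCA under illustrative assumptions (the function name `linear_cca`, the regularizer `reg`, and the synthetic two-view data are not from the paper), not the paper's multiview mixture model.

```python
# Minimal sketch of classical linear CCA (illustrative; not the paper's mixture model).
import numpy as np

def linear_cca(X, Y, k, reg=1e-4):
    """Return projection matrices (Wx, Wy) and the top-k canonical correlations."""
    n = X.shape[0]
    X = X - X.mean(axis=0)          # center each view
    Y = Y - Y.mean(axis=0)

    # Regularized covariance estimates; reg keeps the inverses well conditioned.
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    # Inverse square roots Sxx^{-1/2}, Syy^{-1/2} via eigendecomposition.
    def inv_sqrt(S):
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Sxx_is, Syy_is = inv_sqrt(Sxx), inv_sqrt(Syy)

    # Singular values of the whitened cross-covariance = canonical correlations.
    U, s, Vt = np.linalg.svd(Sxx_is @ Sxy @ Syy_is)
    Wx = Sxx_is @ U[:, :k]
    Wy = Syy_is @ Vt[:k].T
    return Wx, Wy, s[:k]

# Usage: two synthetic views sharing a low-dimensional latent signal.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 5))
X = Z @ rng.normal(size=(5, 20)) + 0.1 * rng.normal(size=(500, 20))
Y = Z @ rng.normal(size=(5, 30)) + 0.1 * rng.normal(size=(500, 30))
Wx, Wy, corrs = linear_cca(X, Y, k=5)
print("top canonical correlations:", np.round(corrs, 3))
```

The paper's model, as described in the abstract, replaces this single pair of projections with a mixture of CCA models over a union of subspaces; the sketch above is only the single-component baseline it generalizes.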
