Overlap matrix concentration in optimal Bayesian inference

4 Apr 2019 · Jean Barbier

We consider models of Bayesian inference of signals with vectorial components of finite dimensionality. We show that, under a proper perturbation, these models are replica symmetric in the sense that the overlap matrix concentrates. The overlap matrix is the order parameter in these models and is directly related to error metrics such as minimum mean-square errors. Our proof is valid in the optimal Bayesian inference setting, i.e., it relies on the assumption that the model and all its hyper-parameters are known, so that the posterior distribution can be written exactly. Examples of important problems in high-dimensional inference and learning to which our results apply are low-rank tensor factorization, the committee machine neural network with a finite number of hidden neurons in the teacher-student scenario, and multi-layer versions of the generalized linear model.
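As a hedged illustration of how the overlap matrix relates to error metrics (the notation and definitions below are a standard choice in this literature, not quoted from the paper): for a ground-truth signal with components $x_i^\star \in \mathbb{R}^K$, $i = 1,\dots,N$, and a sample $x$ drawn from the posterior, the overlap matrix can be taken as

$$ Q \;=\; \frac{1}{N}\sum_{i=1}^{N} x_i \,(x_i^\star)^{\intercal} \;\in\; \mathbb{R}^{K\times K}, $$

and in the Bayes-optimal setting the Nishimori identity yields, for the posterior-mean estimator,

$$ \mathrm{MMSE} \;=\; \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}\,\big\|x_i^\star - \mathbb{E}[x_i \mid \mathrm{data}]\big\|^2 \;=\; \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\,\|x_i^\star\|^2 \;-\; \mathbb{E}\,\mathrm{Tr}\,Q, $$

so concentration of $Q$ around a deterministic value pins down the minimum mean-square error.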


Categories


Information Theory · Disordered Systems and Neural Networks · Probability
