Concentration of the matrix-valued minimum mean-square error in optimal Bayesian inference

15 Jul 2019 · Jean Barbier

We consider Bayesian inference of signals with vector-valued entries. Extending concentration techniques from the mathematical physics of spin glasses, we show that the matrix-valued minimum mean-square error concentrates as the size of the problem increases. Such results are often crucial for proving single-letter formulas for the mutual information when they exist. Our proof is valid in the optimal Bayesian inference setting, meaning that it relies on the assumption that the model and all its hyper-parameters are known. Examples of inference and learning problems covered by our results are spiked matrix and tensor models, the committee machine neural network with a few hidden neurons in the teacher-student scenario, and multi-layer generalized linear models.
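
To make the statement concrete, here is one plausible formalization of the concentrating object, written in LaTeX; the notation (signal entries X_1, ..., X_n in R^K, observation Y) is ours and not taken from the paper, so this is a sketch under assumed conventions rather than the paper's own definition.

% Sketch under assumed notation: X_1,...,X_n are the K-dimensional signal
% entries, Y is the observation, and E[X_i | Y] is the posterior mean.
\[
  \mathcal{M}_n \;=\; \frac{1}{n}\sum_{i=1}^{n}
    \big(X_i - \mathbb{E}[X_i \mid Y]\big)\big(X_i - \mathbb{E}[X_i \mid Y]\big)^{\intercal}
  \;\in\; \mathbb{R}^{K\times K},
  \qquad
  \mathrm{MMSE}_n \;=\; \mathbb{E}\,\mathcal{M}_n .
\]
% "Concentration" is then the statement that the random K x K matrix
% \mathcal{M}_n stays close to the deterministic matrix MMSE_n as the
% problem size n grows, for instance in the sense
\[
  \mathbb{E}\,\big\|\mathcal{M}_n - \mathrm{MMSE}_n\big\|_{F}^{2}
  \;\xrightarrow[\;n\to\infty\;]{}\; 0 .
\]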

