1 code implementation • 27 Feb 2024 • Hong T. M. Chu, Subhro Ghosh, Chi Thanh Lam, Soumendu Sundar Mukherjee
In this paper, we explore this problem in the context of more realistic neural networks with a general class of non-linear activation functions, and rigorously demonstrate the implicit regularization phenomenon for such networks in the setting of matrix sensing problems, with rate guarantees that ensure exponentially fast convergence of gradient descent. In this vein, we contribute a network architecture called Spectral Neural Networks (abbrv. SNN).
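The implicit regularization phenomenon the abstract refers to can be illustrated on a plain matrix sensing instance: gradient descent on an overparameterized factorization $X = UU^\top$ with small initialization recovers a low-rank ground truth from linear measurements, without any explicit rank penalty. The sketch below is a minimal illustration of that classical setup, not the paper's Spectral Neural Network architecture; the dimensions, step size, and initialization scale are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth low-rank PSD matrix X* = U* U*^T (rank 2, size 10x10) -- illustrative sizes.
n, r, m = 10, 2, 200
U_star = rng.normal(size=(n, r))
X_star = U_star @ U_star.T

# Random Gaussian sensing operators A_i and noiseless measurements y_i = <A_i, X*>.
A = rng.normal(size=(m, n, n))
y = np.einsum('mij,ij->m', A, X_star)

# Overparameterized factorization X = U U^T (U is full n x n) with SMALL initialization;
# plain full-batch gradient descent on the least-squares loss.
U = 0.01 * rng.normal(size=(n, n))
lr = 2e-3
for _ in range(3000):
    X = U @ U.T
    resid = np.einsum('mij,ij->m', A, X) - y          # measurement residuals
    grad_X = np.einsum('m,mij->ij', resid, A) / m      # gradient w.r.t. X
    U -= lr * (grad_X + grad_X.T) @ U                  # chain rule through X = U U^T

X_hat = U @ U.T
rel_err = np.linalg.norm(X_hat - X_star) / np.linalg.norm(X_star)
```

Despite the factorization allowing full-rank solutions, the small-initialization gradient-descent trajectory converges to (approximately) the low-rank ground truth, which is the implicit bias being studied.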
no code implementations • 22 Feb 2022 • Xu Cai, Chi Thanh Lam, Jonathan Scarlett
In this paper, we study error bounds for Bayesian quadrature (BQ), with an emphasis on noisy settings, randomized algorithms, and average-case performance measures.
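For readers unfamiliar with Bayesian quadrature: it places a Gaussian process prior on the integrand and returns the posterior mean of the integral, which reduces to a weighted sum of function evaluations. The sketch below is a standard noiseless 1-D BQ estimate under a squared-exponential kernel and a standard normal integration measure (where the kernel mean embedding is available in closed form); the node placement and hyperparameters are illustrative assumptions, not choices from the paper.

```python
import numpy as np

# Integrand and its true integral against the standard normal measure:
# E[sin(x)^2] = (1 - exp(-2)) / 2.
f = lambda x: np.sin(x) ** 2
true_val = 0.5 * (1.0 - np.exp(-2.0))

# Quadrature nodes and GP hyperparameters (illustrative choices).
xs = np.linspace(-3.0, 3.0, 15)
ell = 0.8        # squared-exponential length scale
jitter = 1e-8    # noiseless observations; tiny jitter for numerical stability

# Gram matrix K_ij = k(x_i, x_j) for k(x, x') = exp(-(x - x')^2 / (2 ell^2)).
diff = xs[:, None] - xs[None, :]
K = np.exp(-diff ** 2 / (2 * ell ** 2)) + jitter * np.eye(len(xs))

# Kernel mean embedding z_i = E_{x ~ N(0,1)}[k(x, x_i)], closed form for the SE kernel:
z = ell / np.sqrt(ell ** 2 + 1) * np.exp(-xs ** 2 / (2 * (ell ** 2 + 1)))

# BQ estimate: posterior mean of the integral, z^T K^{-1} f(X).
weights = np.linalg.solve(K, z)
estimate = weights @ f(xs)
```

The error bounds studied in the paper quantify how fast estimates like `estimate` approach `true_val` as the number of nodes grows, including under observation noise and randomized node choices.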