no code implementations • 27 May 2024 • Iosif Lytras, Panayotis Mertikopoulos
Motivated by applications in deep learning, which often fail to satisfy standard Lipschitz smoothness requirements, we examine the problem of sampling from distributions that are not log-concave and are only weakly dissipative, with log-gradients allowed to grow superlinearly at infinity.
no code implementations • 15 Nov 2023 • Iosif Lytras, Sotirios Sabanis
In this article we propose a novel taming-based Langevin scheme, called $\mathbf{sTULA}$, to sample from distributions whose log-gradients grow superlinearly and which also satisfy a log-Sobolev inequality.
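A minimal sketch of the taming idea behind such schemes is shown below: the superlinearly growing log-gradient is rescaled by a step-size-dependent factor so that the per-iteration drift stays bounded. The specific taming function and constants used by sTULA are defined in the paper; the form `g / (1 + h^alpha * ||g||^alpha)` here is only an illustrative, generic choice.

```python
import numpy as np

def tamed_ula_step(x, grad_log_pi, step, rng, alpha=1.0):
    """One generic tamed unadjusted Langevin step.

    The raw log-gradient may grow superlinearly, so it is divided by a
    step-size-dependent factor that keeps the drift bounded per iteration.
    This is an illustrative variant, not the exact sTULA taming.
    """
    g = grad_log_pi(x)
    tamed_drift = g / (1.0 + step**alpha * np.linalg.norm(g)**alpha)
    noise = rng.standard_normal(x.shape)
    return x + step * tamed_drift + np.sqrt(2.0 * step) * noise

# Toy usage: sample from pi(x) proportional to exp(-|x|^4 / 4),
# whose log-gradient -|x|^2 x grows superlinearly.
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(10_000):
    x = tamed_ula_step(x, lambda y: -y * np.dot(y, y), step=1e-2, rng=rng)
```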
no code implementations • 19 Jan 2023 • Tim Johnston, Iosif Lytras, Sotirios Sabanis
In this article we consider sampling from log-concave distributions in the Hamiltonian setting, without assuming that the objective gradient is globally Lipschitz.
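For orientation, the Hamiltonian (kinetic, or underdamped) setting augments the position with a velocity variable. The sketch below is a plain Euler-type discretization of kinetic Langevin dynamics, assuming a potential `U` and friction parameter `gamma`; it illustrates the setting only and is not the paper's scheme, which is designed to cope with gradients that are not globally Lipschitz.

```python
import numpy as np

def kinetic_langevin_step(x, v, grad_U, step, gamma, rng):
    """One Euler-type step of kinetic (underdamped) Langevin dynamics.

    Position x and velocity v evolve jointly; gamma is the friction.
    Illustrative discretization only, not the scheme analyzed in the paper.
    """
    noise = rng.standard_normal(v.shape)
    v_new = v - step * (gamma * v + grad_U(x)) + np.sqrt(2.0 * gamma * step) * noise
    x_new = x + step * v_new
    return x_new, v_new
```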
no code implementations • 25 Jun 2020 • Attila Lovas, Iosif Lytras, Miklós Rásonyi, Sotirios Sabanis
We offer a new learning algorithm based on an appropriately constructed variant of the popular stochastic gradient Langevin dynamics (SGLD), called the tamed unadjusted stochastic Langevin algorithm (TUSLA).
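A minimal sketch of a tamed stochastic-gradient Langevin update is given below, assuming a mini-batch gradient estimator `stochastic_grad` and a taming factor of the form `1 + h^alpha * ||g||`; the precise taming used by TUSLA is specified in the paper, so this should be read as an illustrative variant rather than the authors' exact algorithm.

```python
import numpy as np

def tusla_like_step(theta, stochastic_grad, batch, step, rng, alpha=0.5):
    """One iteration of a tamed stochastic-gradient Langevin update.

    A mini-batch gradient estimate replaces the full gradient, and taming
    controls superlinear growth of the loss gradient. Illustrative only.
    """
    g = stochastic_grad(theta, batch)          # mini-batch gradient of the loss
    tamed = g / (1.0 + step**alpha * np.linalg.norm(g))
    noise = rng.standard_normal(theta.shape)
    return theta - step * tamed + np.sqrt(2.0 * step) * noise
```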