1 code implementation • 4 Dec 2022 • Ruggiero Seccia, Corrado Coppola, Giampaolo Liuzzi, Laura Palagi
In this work, we consider minimizing the average of a very large number of smooth and possibly non-convex functions, and we focus on two widely used minibatch frameworks to tackle this optimization problem: Incremental Gradient (IG) and Random Reshuffling (RR).
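The difference between the two frameworks is only the order in which the component gradients are visited: IG cycles through them in a fixed order, while RR draws a fresh permutation at every epoch. A minimal sketch of this distinction (the function name `minibatch_sgd` and the toy quadratic problem are illustrative assumptions, not the paper's setup):

```python
import random

def minibatch_sgd(grads, x0, lr, epochs, reshuffle, seed=0):
    """Epochs of single-sample gradient steps over component gradients.

    grads: list of per-component gradient functions g_i(x).
    reshuffle=False -> Incremental Gradient (fixed cyclic order);
    reshuffle=True  -> Random Reshuffling (new permutation each epoch).
    """
    rng = random.Random(seed)
    x = x0
    order = list(range(len(grads)))
    for _ in range(epochs):
        if reshuffle:
            rng.shuffle(order)  # RR: fresh permutation every epoch
        for i in order:         # IG: same cyclic order every epoch
            x -= lr * grads[i](x)
    return x

# Toy problem: minimize (1/n) * sum_i (x - a_i)^2 / 2, minimized at mean(a).
a = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, ai=ai: (x - ai) for ai in a]

x_ig = minibatch_sgd(grads, x0=0.0, lr=0.05, epochs=200, reshuffle=False)
x_rr = minibatch_sgd(grads, x0=0.0, lr=0.05, epochs=200, reshuffle=True)
```

On this toy problem both variants land near the minimizer 2.5; the point of the sketch is only the ordering of the inner loop, not relative performance.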
no code implementations • 18 Mar 2020 • Laura Palagi, Ruggiero Seccia
Estimating the weights of Deep Feedforward Neural Networks (DFNNs) requires solving a very large nonconvex optimization problem that may have many local (non-global) minimizers, saddle points, and large plateaus.
no code implementations • 17 Jun 2019 • Francesco Foglino, Matteo Leonetti, Simone Sagratella, Ruggiero Seccia
Curriculum learning is often employed in deep reinforcement learning to let the agent progress more quickly towards better behaviors.