
Does Adam optimizer keep close to the optimal point?

Adaptive optimizers for training neural networks have continually evolved to overcome the limitations of previously proposed adaptive methods. Recent studies have identified rare counterexamples on which Adam fails to converge to the optimal point. These counterexamples reveal that Adam's update is distorted when a small gradient yields a small second-moment estimate. Unlike previous studies, we show that Adam cannot stay close to the optimal point, not only on these counterexamples but also over a general convex region, whenever the effective learning rate exceeds a certain bound. We then propose an algorithm that overcomes this limitation of Adam and ensures that it can both reach and remain in the region around the optimal point.
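To make the notion of "effective learning rate" in the abstract concrete, the following is a minimal sketch of the standard Adam update (Kingma & Ba, 2015) on a toy 1-D convex objective. It is an illustration of the baseline optimizer the abstract discusses, not the paper's proposed algorithm; the function and hyperparameter names are conventional choices assumed here.

```python
import math

def adam_1d(grad_fn, x0, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Standard Adam on a 1-D objective; illustrative sketch only."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        # Effective learning rate alpha / (sqrt(v_hat) + eps):
        # a small second moment (from small gradients) inflates the step size.
        x -= alpha / (math.sqrt(v_hat) + eps) * m_hat
    return x

# f(x) = x^2 has its optimum at x = 0; grad f(x) = 2x.
print(adam_1d(lambda x: 2 * x, x0=1.0))
```

The per-step multiplier alpha / (sqrt(v_hat) + eps) is the effective learning rate referred to above: when gradients are small, v_hat shrinks and the step size grows, which is the mechanism behind the counterexamples and the bound discussed in the abstract.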
