no code implementations • 29 Mar 2024 • Yuki Akiyama, Minh Vu, Konstantinos Slavakis
This paper designs novel nonparametric Bellman mappings in reproducing kernel Hilbert spaces (RKHSs) for reinforcement learning (RL).
no code implementations • 14 Sep 2023 • Yuki Akiyama, Konstantinos Slavakis
These mappings are defined in reproducing kernel Hilbert spaces (RKHSs) to benefit from the rich approximation properties and inner products of RKHSs. They are shown to belong to the powerful Hilbertian family of (firmly) nonexpansive mappings, regardless of the values of their discount factors, and to possess ample degrees of design freedom, enough to reproduce attributes of the classical Bellman mappings and to pave the way for novel RL designs.
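The abstracts above describe Bellman mappings whose value functions live in an RKHS. As a rough illustration of that general idea (not the authors' specific construction), the following sketch applies one empirical Bellman update with the value function represented as a kernel expansion and refit by kernel ridge regression; the Gaussian kernel, the regularization `lam`, and the random data are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of one empirical Bellman update in an RKHS.
# The kernel choice, regularization, and synthetic data are assumptions
# for illustration; this is not the papers' exact mapping.

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix of a Gaussian (RBF) kernel between row-stacked points."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
n, gamma, lam = 50, 0.9, 1e-3
S = rng.normal(size=(n, 2))        # sampled states s_i
R = rng.normal(size=n)             # observed rewards r_i
S_next = rng.normal(size=(n, 2))   # successor states s'_i

# Value estimate V(s) = sum_j alpha_j k(s_j, s); start from zero.
alpha = np.zeros(n)

# Empirical Bellman target: y_i = r_i + gamma * V(s'_i).
V_next = gaussian_kernel(S_next, S) @ alpha
y = R + gamma * V_next

# Kernel ridge regression fits the updated value function in the RKHS.
K = gaussian_kernel(S, S)
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# Updated estimate evaluated at the sample states.
V = K @ alpha
```

Because the ridge problem has a closed-form solution, each Bellman application reduces to a single linear solve in the expansion coefficients.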
no code implementations • 21 Oct 2022 • Yuki Akiyama, Minh Vu, Konstantinos Slavakis
This paper introduces a solution to the problem of selecting dynamically (online) the "optimal" p-norm to combat outliers in linear adaptive filtering without any knowledge of the probability density function of the outliers.
no code implementations • 20 Oct 2022 • Minh Vu, Yuki Akiyama, Konstantinos Slavakis
This study addresses the problem of selecting dynamically, at each time instance, the "optimal" p-norm to combat outliers in linear adaptive filtering without any knowledge of the potentially time-varying probability distribution function of the outliers.
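To make the setting of these two entries concrete, here is a minimal sketch of least-mean-p (LMP) adaptive filtering under heavy-tailed noise, with a deliberately naive online p-selection rule (pick the candidate p whose filter currently has the smallest smoothed absolute error). The candidate set, step size, and selection criterion are illustrative assumptions; the papers' actual selection mechanism is different.

```python
import numpy as np

# Illustrative sketch only: parallel LMP filters over a small candidate
# set of p-norms, with a naive smoothed-error rule for picking p online.
# All parameters here are assumptions, not the papers' method.

rng = np.random.default_rng(1)
n, d, mu = 2000, 5, 0.01
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
noise = 0.1 * rng.standard_t(df=1.5, size=n)   # heavy-tailed outliers
y = X @ w_true + noise

ps = [1.0, 1.5, 2.0]
W = {p: np.zeros(d) for p in ps}    # one filter per candidate p
ema = {p: 1.0 for p in ps}          # smoothed |error| per filter

for x, t in zip(X, y):
    for p in ps:
        e = t - x @ W[p]
        # LMP update: w <- w + mu * sign(e) * |e|^(p-1) * x
        # (p = 2 recovers LMS; p = 1 gives the sign algorithm).
        W[p] += mu * np.sign(e) * np.abs(e) ** (p - 1) * x
        ema[p] = 0.99 * ema[p] + 0.01 * abs(e)

best_p = min(ps, key=lambda p: ema[p])
err = np.linalg.norm(W[best_p] - w_true)
```

Under heavy-tailed noise the smaller-p filters tend to be favored, since the |e|^(p-1) factor damps the influence of outlier-driven errors on the update.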