no code implementations • 30 May 2022 • Moad Abudia, Joel A. Rosenfeld, Rushikesh Kamalapurkar
This paper concerns identification of uncontrolled or closed-loop nonlinear systems using a set of trajectories generated by the system within a domain of attraction.
no code implementations • 4 Apr 2022 • S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar
Safe model-based reinforcement learning techniques based on a barrier transformation have recently been developed to address this problem.
no code implementations • 1 Oct 2021 • S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar
The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially when task restarts are not available and/or the systems are safety-critical.
no code implementations • 31 May 2021 • Efrain Gonzalez, Moad Abudia, Michael Jury, Rushikesh Kamalapurkar, Joel A. Rosenfeld
This manuscript revisits theoretical assumptions concerning dynamic mode decomposition (DMD) of Koopman operators, including the existence of lattices of eigenfunctions, common eigenfunctions between Koopman operators, and boundedness and compactness of Koopman operators.
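For context, a minimal sketch of standard exact DMD on snapshot data (illustrative only; it does not reproduce the manuscript's Koopman-theoretic analysis, and the rank-2 linear system is a hypothetical example):

```python
import numpy as np

# Minimal sketch of exact dynamic mode decomposition (DMD).
def dmd(X, Y, r):
    """Given snapshot pairs stacked as columns of X and Y with
    y_k = F(x_k), return rank-r DMD eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Projected operator approximating the finite-rank Koopman action
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W / eigvals  # exact DMD modes
    return eigvals, modes

# Usage: snapshots from a known linear map, so the DMD eigenvalues
# should recover the map's spectrum {0.9, 0.5}.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.5]])
X = rng.standard_normal((2, 50))
Y = A @ X
eigvals, modes = dmd(X, Y, r=2)
print(sorted(eigvals.real))  # approximately [0.5, 0.9]
```

The boundedness and compactness questions the manuscript raises concern when a finite-rank computation like this one can be justified as an approximation of the underlying (generally unbounded) Koopman operator.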
no code implementations • 31 May 2021 • Moad Abudia, Tejasvi Channagiri, Joel A. Rosenfeld, Rushikesh Kamalapurkar
As the fundamental basis elements leveraged in the approximation, higher-order control occupation kernels represent iterated integration after multiplication by a given controller in a vector-valued reproducing kernel Hilbert space.
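To make "iterated integration after multiplication by a controller" concrete, a numerical sketch of first- and second-order control occupation functionals (the trajectory, controller, and test function below are hypothetical illustrations, not the paper's examples):

```python
import numpy as np

# Numerical sketch of control occupation functionals: first order
# f -> ∫_0^T u(t) f(x(t)) dt, and second order, which iterates the
# integration once more after multiplying by the controller again.
def cumtrapz(y, t):
    """Cumulative trapezoidal integral of samples y over grid t."""
    dt = np.diff(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

t = np.linspace(0.0, 1.0, 2001)
x = t                      # hypothetical trajectory x(t) = t
u = np.ones_like(t)        # hypothetical controller u(t) = 1
f = lambda z: z            # test function f(x) = x

# First order: ∫_0^1 u(t) f(x(t)) dt = 1/2
I1 = cumtrapz(u * f(x), t)[-1]
# Second order (iterated): ∫_0^1 u(t) ∫_0^t u(s) f(x(s)) ds dt = 1/6
I2 = cumtrapz(u * cumtrapz(u * f(x), t), t)[-1]
print(round(I1, 4), round(I2, 4))  # 0.5 0.1667
```

In the occupation-kernel framework, each such functional is bounded on the reproducing kernel Hilbert space and therefore represented by a kernel element; the code above merely evaluates the functionals numerically to show what the representers encode.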