no code implementations • ICLR 2022 (arXiv 4 Feb 2022) • Josue Nassar, Jennifer Rogers Brennan, Ben Evans, Kendall Lowrey
Online learning via Bayes' theorem allows new data to be continuously integrated into an agent's current beliefs.
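The online-Bayes idea in this abstract can be illustrated with the simplest possible case. A minimal sketch, assuming a conjugate Beta-Bernoulli model (the model and data stream here are illustrative, not from the paper):

```python
import numpy as np

# Sequential Bayesian updating: each observation is folded into the
# current belief via Bayes' theorem, which for a conjugate Beta prior
# over a Bernoulli success probability reduces to a count update.
alpha, beta = 1.0, 1.0          # uniform Beta(1, 1) prior
stream = [1, 0, 1, 1, 0, 1]     # hypothetical data arriving online
for x in stream:
    alpha += x                  # posterior after x becomes the new prior
    beta += 1 - x
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)           # belief after the whole stream
```

The key property the abstract refers to is that no batch retraining is needed: the posterior after each datum serves as the prior for the next.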
1 code implementation • 30 Jun 2021 • Motoya Ohnishi, Isao Ishikawa, Kendall Lowrey, Masahiro Ikeda, Sham Kakade, Yoshinobu Kawahara
In this work, we present a novel paradigm of controlling nonlinear systems via the minimization of the Koopman spectrum cost: a cost over the Koopman operator of the controlled dynamics.
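The object the cost is defined over can be made concrete with the standard finite-dimensional Koopman estimate. A hedged sketch, assuming exactly linear toy dynamics so the estimate is recoverable (the dynamics matrix and data here are illustrative, not the paper's method):

```python
import numpy as np

# Estimate a finite-dimensional Koopman operator K with x_{t+1} ≈ K x_t
# by least squares over a trajectory (the DMD estimate). The "Koopman
# spectrum" the cost is defined over is the spectrum of this operator.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])              # hypothetical linear dynamics
X = [rng.normal(size=2)]
for _ in range(50):
    X.append(A @ X[-1])
X = np.array(X)
Xp, Xc = X[1:].T, X[:-1].T              # shifted and current snapshots
K = Xp @ np.linalg.pinv(Xc)             # least-squares Koopman estimate
spectrum = np.linalg.eigvals(K)         # eigenvalues govern the dynamics
```

For genuinely nonlinear systems the same regression is done over a dictionary of observables rather than the raw state; the linear case above is just the smallest example where the estimated spectrum matches the true one.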
3 code implementations • 12 Dec 2020 • Samuel Ainsworth, Kendall Lowrey, John Thickstun, Zaid Harchaoui, Siddhartha Srinivasa
We study the estimation of policy gradients for continuous-time systems with known dynamics.
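What "policy gradient with known dynamics" means can be sketched with a one-dimensional example: roll out the known ODE under the policy, accumulate cost, and differentiate the return with respect to the policy parameter. A minimal sketch (the dynamics, cost, and finite-difference gradient are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def rollout_return(theta, dt=0.01, T=1.0):
    """Euler-discretized return of a linear feedback policy u = -theta * x
    under known dynamics xdot = u, from x(0) = 1."""
    x, J = 1.0, 0.0
    for _ in range(int(T / dt)):
        u = -theta * x
        J += (x**2 + 0.1 * u**2) * dt   # running cost
        x += u * dt                     # known dynamics, Euler step
    return J

# Finite-difference gradient of the return wrt the policy parameter.
theta, eps = 0.5, 1e-5
grad = (rollout_return(theta + eps) - rollout_return(theta - eps)) / (2 * eps)
```

At theta = 0.5 the return still decreases as the gain grows (tighter regulation outweighs the small control penalty), so the gradient is negative; a gradient method would increase theta from here.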
1 code implementation • NeurIPS 2020 • Sham Kakade, Akshay Krishnamurthy, Kendall Lowrey, Motoya Ohnishi, Wen Sun
This work studies the problem of sequential control in an unknown, nonlinear dynamical system, where we model the underlying system dynamics as an unknown function in a known Reproducing Kernel Hilbert Space.
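Modeling the unknown dynamics as a function in a known RKHS can be illustrated with plain kernel ridge regression. A hedged sketch, assuming an RBF kernel and a hypothetical one-dimensional system (this shows the RKHS function estimate only, not the paper's control algorithm):

```python
import numpy as np

def rbf(X, Y, gamma=10.0):
    """RBF kernel matrix; the RKHS is the space this kernel reproduces."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))    # observed states
Y = np.sin(3 * X[:, 0])                 # next-state targets (hypothetical f)

# Kernel ridge regression: the estimate lives in the span of kernel
# sections at the data, by the representer theorem.
lam = 1e-3
coef = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), Y)

def f_hat(x):
    return rbf(np.atleast_2d(x), X) @ coef
```

The estimated function interpolates the observed transitions smoothly, which is what lets a controller plan against it between data points.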
no code implementations • L4DC 2020 • Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, Emanuel Todorov
We introduce Lyceum, a high-performance computational ecosystem for robot learning.
no code implementations • ICLR 2019 • Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning.
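The interplay the abstract describes, short-horizon trajectory optimization whose rollouts are capped by a value function, can be sketched with random-shooting MPC on a toy system. All names and the toy dynamics below are illustrative assumptions, and the terminal value here is a fixed stand-in for a learned one:

```python
import numpy as np

def rollout_cost(x, actions, dynamics, cost, terminal_value):
    """Finite-horizon cost plus a terminal value: the value function
    summarizes everything beyond the planning horizon, so errors in it
    can be partly corrected by the optimization over the horizon."""
    total = 0.0
    for u in actions:
        total += cost(x, u)
        x = dynamics(x, u)
    return total + terminal_value(x)

# Toy 1-D system (assumed for illustration).
dynamics = lambda x, u: x + 0.1 * u
cost = lambda x, u: x**2 + 0.01 * u**2
terminal_value = lambda x: 10.0 * x**2   # stand-in for a learned V(x)

# Random-shooting MPC over a short horizon.
rng = np.random.default_rng(2)
x0, H = 1.0, 5
candidates = rng.uniform(-2, 2, size=(256, H))
best = min(candidates,
           key=lambda a: rollout_cost(x0, a, dynamics, cost, terminal_value))
J_best = rollout_cost(x0, best, dynamics, cost, terminal_value)
```

Doing nothing from x0 = 1 costs 15 (five steps of running cost plus the terminal value), so any useful plan must beat that; the selected sequence drives the state toward the origin.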
no code implementations • 28 Mar 2018 • Kendall Lowrey, Svetoslav Kolev, Jeremy Dao, Aravind Rajeswaran, Emanuel Todorov
Reinforcement learning has emerged as a promising methodology for training robot controllers.
1 code implementation • NeurIPS 2017 • Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, Sham Kakade
This work shows that policies with simple linear and RBF parameterizations can be trained to solve a variety of continuous control tasks, including the OpenAI gym benchmarks.
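The parameterization the result concerns is small enough to write out in full. A minimal sketch of an RBF policy built from random Fourier features (dimensions and scales are illustrative; the weights here are zero-initialized, standing in for weights trained by RL):

```python
import numpy as np

# RBF policy: a fixed random-feature lift of the observation followed by
# a linear map — the entire trainable parameter is the matrix W.
rng = np.random.default_rng(3)
obs_dim, act_dim, n_feats = 4, 1, 100
P = rng.normal(size=(n_feats, obs_dim))         # fixed random projections
phase = rng.uniform(0, 2 * np.pi, n_feats)      # fixed random phases
W = np.zeros((act_dim, n_feats))                # policy weights (trained by RL)

def policy(obs):
    feats = np.sin(P @ obs + phase)             # RBF-style random features
    return W @ feats
```

A purely linear policy is the special case where the features are the observation itself; the point of the paper is that even this small family is competitive on standard continuous-control benchmarks.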
no code implementations • NeurIPS 2015 • Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, Emanuel V. Todorov
We present a method for training recurrent neural networks to act as near-optimal feedback controllers.
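The controller architecture in question is just a recurrent map from observation and hidden state to action. A hedged sketch with illustrative shapes and untrained random weights (training, the paper's contribution, is omitted):

```python
import numpy as np

# A recurrent feedback controller: the hidden state h carries memory
# across time steps, and the action is read out from it.
rng = np.random.default_rng(4)
obs_dim, hid_dim, act_dim = 3, 8, 1
Wh = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
Wx = rng.normal(scale=0.1, size=(hid_dim, obs_dim))
Wu = rng.normal(scale=0.1, size=(act_dim, hid_dim))

def step(h, obs):
    h = np.tanh(Wh @ h + Wx @ obs)   # recurrent state update
    return h, Wu @ h                 # action read out from hidden state

h = np.zeros(hid_dim)
for t in range(10):
    obs = np.zeros(obs_dim)          # hypothetical observation stream
    h, u = step(h, obs)
```

Because the controller is stateful, it can realize feedback laws that depend on history, not just the instantaneous observation, which is what makes it a candidate for near-optimal feedback control.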