Search Results for author: Liqun Zhao

Found 2 papers, 2 papers with code

Stable and Safe Human-aligned Reinforcement Learning through Neural Ordinary Differential Equations

2 code implementations • 23 Jan 2024 • Liqun Zhao, Keyan Miao, Konstantinos Gatsis, Antonis Papachristodoulou

Reinforcement learning (RL) excels in applications such as video games, but ensuring both safety and the ability to achieve specified goals remains challenging when applying RL to real-world problems, such as human-aligned tasks where human safety is paramount.
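Since the paper's title centers on neural ordinary differential equations, a minimal sketch may help illustrate the general idea: a neural ODE learns continuous-time dynamics dx/dt = f_theta(x, a), which an agent can integrate forward to predict states and check safety constraints before acting. The sketch below is an illustration under assumed names (ODEDynamics, rollout) and plain PyTorch with fixed-step RK4 integration; it is not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a neural ODE dynamics model
# dx/dt = f_theta(x, a) that a safety-aware RL agent could roll forward
# to predict future states.
import torch
import torch.nn as nn


class ODEDynamics(nn.Module):
    """Learned vector field f_theta(x, a) for dx/dt (hypothetical name)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # Concatenate state and action, output the time derivative of x.
        return self.net(torch.cat([x, a], dim=-1))


def rollout(f: ODEDynamics, x0: torch.Tensor, actions: torch.Tensor,
            dt: float = 0.05) -> torch.Tensor:
    """Integrate the ODE with fixed-step RK4, applying one action per step."""
    x, traj = x0, [x0]
    for a in actions:
        k1 = f(x, a)
        k2 = f(x + 0.5 * dt * k1, a)
        k3 = f(x + 0.5 * dt * k2, a)
        k4 = f(x + dt * k3, a)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x)
    return torch.stack(traj)


# Usage: predict an 11-state trajectory from an initial state and 10 actions.
f = ODEDynamics(state_dim=4, action_dim=1)
x0 = torch.zeros(4)
actions = torch.zeros(10, 1)
print(rollout(f, x0, actions).shape)  # torch.Size([11, 4])
```

In a safe-RL setting, predicted trajectories like these could be screened against state constraints (e.g., keeping a distance from a human) before an action is executed; the exact mechanism used in the paper is described in the paper itself.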

Tasks: Reinforcement Learning (RL), Safe Reinforcement Learning
