Learn Task First or Learn Human Partner First: A Hierarchical Task Decomposition Method for Human-Robot Cooperation

1 Mar 2020 · Lingfeng Tao, Michael Bowman, Jiucai Zhang, Xiaoli Zhang

Applying Deep Reinforcement Learning (DRL) to Human-Robot Cooperation (HRC) in dynamic control problems is promising yet challenging, as the robot must learn both the dynamics of the controlled system and the dynamics of the human partner. In existing research, the DRL-powered robot adopts a coupled observation of the environment and the human partner to learn both dynamics simultaneously. However, such a learning strategy limits learning efficiency and team performance. This work proposes a novel task decomposition method with a hierarchical reward mechanism that enables the robot to learn the hierarchical dynamic control task separately from learning the human partner's behavior. The method is validated on a hierarchical control task in a simulated environment with human-subject experiments. It also provides insight into the design of learning strategies for HRC: the results show that the robot should learn the task first to achieve higher team performance, and learn the human first to achieve higher learning efficiency.

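The page lists no code implementation, so the Python sketch below is not the authors' method; it only illustrates, under stated assumptions, the general idea of a decomposed reward that is exposed in stages, so the robot can learn the task objective and the human-coordination objective separately and in either order. All names (TASK_FIRST, r_task, r_human, staged_reward) and the toy bandit-style learner are hypothetical.

# Minimal illustrative sketch of a staged, decomposed reward for HRC-style DRL.
# Not the paper's implementation; all names and reward shapes are assumptions.
import random

TASK_FIRST = True  # ordering hypothesis: learn the task first vs. the human first

def r_task(robot_action, goal_action):
    # Reward for mastering the controlled system's dynamics.
    return 1.0 if robot_action == goal_action else -0.1

def r_human(robot_action, human_action):
    # Reward for coordinating with the human partner's behavior.
    return 0.5 if robot_action == human_action else -0.05

def staged_reward(stage, robot_action, goal_action, human_action):
    # Hierarchical reward: expose one objective per learning stage,
    # then combine both objectives in a final joint stage.
    if stage == "task":
        return r_task(robot_action, goal_action)
    if stage == "human":
        return r_human(robot_action, human_action)
    return r_task(robot_action, goal_action) + r_human(robot_action, human_action)

# Toy two-action problem with a tabular, bandit-style value update,
# purely to show how the staged reward plugs into a learning loop.
Q = {a: 0.0 for a in (0, 1)}
alpha, epsilon = 0.1, 0.2
stages = ["task", "human", "joint"] if TASK_FIRST else ["human", "task", "joint"]

for stage in stages:
    for episode in range(200):
        goal_action = 1                        # stand-in for system dynamics
        human_action = random.choice((0, 1))   # stand-in for partner behavior
        a = random.choice((0, 1)) if random.random() < epsilon else max(Q, key=Q.get)
        r = staged_reward(stage, a, goal_action, human_action)
        Q[a] += alpha * (r - Q[a])             # no next state in this toy setting

print("Learned action values:", Q)

Flipping TASK_FIRST changes only the stage ordering, which is the design choice the abstract's conclusion is about: task-first ordering targeting team performance, human-first ordering targeting learning efficiency.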