Search Results for author: Nianli Peng

Found 2 papers, 1 paper with code

Nonlinear Multi-objective Reinforcement Learning with Provable Guarantees

no code implementations · 5 Nov 2023 · Nianli Peng, Brandon Fain

We first state a distinct reward-aware version of value iteration that calculates a non-stationary policy that is approximately optimal for a given model of the environment.

Fairness · Multi-Objective Reinforcement Learning · +1
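The abstract above describes value iteration over a known model with vector-valued rewards, producing a non-stationary policy. As a rough illustration of that setting only (not the paper's algorithm or its guarantees), the Python sketch below runs finite-horizon value iteration on a tabular MDP and, at each time step, acts greedily on a nonlinear welfare function of the expected vector return. The Nash-welfare choice, the `(S, A, S)` / `(S, A, d)` array layout, and the greedy rule are assumptions made for illustration.

```python
import numpy as np

def nash_welfare(v, eps=1e-8):
    # Geometric mean across objectives: one example of a nonlinear,
    # fairness-oriented welfare function (chosen here only for illustration).
    return float(np.prod(np.maximum(v, eps)) ** (1.0 / len(v)))

def nonstationary_vector_vi(P, R, horizon, welfare=nash_welfare):
    """Finite-horizon value iteration on a known tabular MDP with
    vector-valued rewards.

    P: (S, A, S) transition probabilities; R: (S, A, d) reward vectors.
    Returns a time-indexed policy pi[t, s] and vector values V[t, s, :].
    Acting greedily on the welfare of the expected vector return at every
    step yields a non-stationary policy; this heuristic only stands in for
    the paper's reward-aware procedure and carries no approximation guarantee.
    """
    S, A, _ = P.shape
    d = R.shape[-1]
    V = np.zeros((horizon + 1, S, d))       # V[horizon] is the zero terminal value
    pi = np.zeros((horizon, S), dtype=int)
    for t in range(horizon - 1, -1, -1):    # backward induction over time
        for s in range(S):
            q = R[s] + P[s] @ V[t + 1]      # (A, d): r(s, a) + E[V_{t+1}(s')]
            scores = [welfare(q[a]) for a in range(A)]
            best = int(np.argmax(scores))
            pi[t, s] = best
            V[t, s] = q[best]
    return pi, V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(3), size=(3, 2))   # toy MDP: 3 states, 2 actions
    R = rng.random((3, 2, 2))                    # 2 objectives
    pi, V = nonstationary_vector_vi(P, R, horizon=5)
    print(pi)
```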

Welfare and Fairness in Multi-objective Reinforcement Learning

1 code implementation · 30 Nov 2022 · Zimeng Fan, Nianli Peng, Muhang Tian, Brandon Fain

We study fair multi-objective reinforcement learning in which an agent must learn a policy that simultaneously achieves high reward on multiple dimensions of a vector-valued reward.

Fairness · Multi-Objective Reinforcement Learning · +2
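For a sense of what learning against a vector-valued reward can look like in code, the sketch below is a welfare-greedy variant of tabular Q-learning: it keeps a Q-vector per state-action pair and ranks actions by the welfare of the return accumulated so far plus the Q-estimate. The product-style welfare function, the `welfare_q_learning` name, and the assumption of a Gymnasium-style environment whose `step()` returns a reward vector are illustrative choices, not the paper's method; the entry's linked code implementation contains the authors' actual algorithm.

```python
import numpy as np

def welfare_q_learning(env, n_objectives, episodes=500, horizon=100,
                       alpha=0.1, gamma=0.95, epsilon=0.1,
                       welfare=lambda v: float(np.prod(np.maximum(v, 1e-8)))):
    """Tabular Q-learning with vector-valued rewards.

    One Q-table slice per objective; actions are scored by the welfare of the
    return accumulated so far plus the vector Q-estimate, so the effective
    policy depends on accumulated reward and is therefore non-stationary.
    `env` is assumed to be a Gymnasium-style environment whose step() returns
    a length-`n_objectives` reward vector; the action rule is an illustrative
    heuristic, not the exact algorithm from the paper.
    """
    n_states = env.observation_space.n
    n_actions = env.action_space.n
    Q = np.zeros((n_states, n_actions, n_objectives))
    rng = np.random.default_rng(0)

    def greedy(state, acc):
        return int(np.argmax([welfare(acc + Q[state, a]) for a in range(n_actions)]))

    for _ in range(episodes):
        s, _ = env.reset()
        acc = np.zeros(n_objectives)            # accumulated vector return
        for _ in range(horizon):
            a = rng.integers(n_actions) if rng.random() < epsilon else greedy(s, acc)
            s_next, r_vec, terminated, truncated, _ = env.step(a)
            r_vec = np.asarray(r_vec, dtype=float)
            a_next = greedy(s_next, acc + r_vec)
            # Per-objective TD update toward the welfare-greedy successor action
            Q[s, a] += alpha * (r_vec + gamma * Q[s_next, a_next] - Q[s, a])
            acc += r_vec
            s = s_next
            if terminated or truncated:
                break
    return Q
```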
