Differentially Private Reward Functions for Markov Decision Processes

Markov decision processes often seek to maximize a reward function, but onlookers may infer reward functions by observing agents, which can reveal sensitive information. Therefore, in this paper we introduce and compare two methods for privatizing reward functions in policy synthesis for multi-agent Markov decision processes, which generalize Markov decision processes. Reward functions are privatized using differential privacy, a statistical framework for protecting sensitive data. The methods we develop perturb (i) each agent's individual reward function or (ii) the joint reward function shared by all agents. We prove that both of these methods are differentially private and show that approach (i) provides better performance. We then develop an algorithm for the numerical computation of the performance loss due to privacy on a case-by-case basis. We also exactly compute the computational complexity of this algorithm in terms of system parameters and show that it is inherently tractable. Numerical simulations are performed on a gridworld example and on waypoint guidance of an autonomous vehicle, and both examples show that privacy induces only negligible performance losses in practice.
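To make the two privatization approaches concrete, the sketch below perturbs reward vectors with Gaussian noise and contrasts the per-agent approach (i) with the joint approach (ii). This is only a minimal illustration: the function name `privatize_rewards`, the sensitivity value, and the use of the standard Gaussian-mechanism calibration are assumptions for this sketch, not the paper's exact mechanism, adjacency relation, or privacy parameters.

```python
import numpy as np

def privatize_rewards(reward, epsilon, delta, sensitivity, rng=None):
    """Perturb a reward vector with the Gaussian mechanism.

    `reward` is an array of per-state (or per-state-action) rewards,
    `sensitivity` is the assumed L2 distance between adjacent reward
    functions, and (epsilon, delta) are the privacy parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP
    # (valid for epsilon < 1); the paper may calibrate noise differently.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return reward + rng.normal(scale=sigma, size=np.shape(reward))

# Approach (i): each agent perturbs its own reward function.
agent_rewards = [np.random.rand(25) for _ in range(3)]  # 3 agents, 25 states
private_individual = [
    privatize_rewards(r, epsilon=1.0, delta=1e-5, sensitivity=1.0)
    for r in agent_rewards
]

# Approach (ii): the shared joint reward function is perturbed once.
joint_reward = np.sum(agent_rewards, axis=0)
private_joint = privatize_rewards(joint_reward, epsilon=1.0, delta=1e-5,
                                  sensitivity=1.0)
```

Policies would then be synthesized from the noisy rewards; the paper's analysis quantifies how much performance such perturbation costs relative to the non-private rewards.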
