Steering game dynamics towards desired outcomes

1 Apr 2024  ·  Ilayda Canyakmaz, Iosif Sakos, Wayne Lin, Antonios Varvitsiotis, Georgios Piliouras

The dynamic behavior of agents in games, which captures how their strategies evolve over time based on past interactions, can lead to a spectrum of undesirable behaviors, ranging from non-convergence to Nash equilibria to the emergence of limit cycles and chaos. To mitigate the effects of selfish behavior, central planners can use dynamic payments to guide strategic multi-agent systems toward stability and socially optimal outcomes. However, the effectiveness of such interventions critically relies on accurately predicting agents' responses to incentives and dynamically adjusting payments so that the system is guided towards the desired outcomes. These challenges are further amplified in real-time applications where the dynamics are unknown and only scarce data is available. To tackle these challenges, we introduce the SIAR-MPC method, which combines the recently introduced Side Information Assisted Regression (SIAR) method for system identification with Model Predictive Control (MPC). SIAR uses side-information constraints inherent to game-theoretic applications to model agent responses to payments from scarce data, while MPC uses this model to dynamically adjust payments. Our experiments demonstrate the efficiency of SIAR-MPC in guiding the system towards socially optimal equilibria, stabilizing chaotic behaviors, and avoiding specified regions of the state space. Comparative analyses in data-scarce settings show SIAR-MPC's superior performance over pairing MPC with Physics Informed Neural Networks (PINNs), a powerful system identification method that finds models satisfying specific constraints.
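
The sketch below is a hypothetical, simplified illustration of the SIAR-MPC idea on a symmetric 2x2 game, not the paper's implementation. Agent responses to payments are fitted from a short trajectory by constrained least-squares regression (a stand-in for SIAR, with simplex invariance as the side information baked into the model structure), and a receding-horizon MPC loop then picks payments that steer the identified model toward a target strategy profile. The payoff matrices, features, horizon, bounds, and target are assumptions chosen for illustration.

# Hypothetical sketch of the SIAR-MPC pipeline on a 2x2 game. Names, payoffs,
# and the regression/constraint details are illustrative assumptions, not the
# paper's actual SIAR or MPC formulations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# "Unknown" ground-truth dynamics: controlled replicator on an assumed
# Prisoner's-Dilemma-like symmetric game. x[i] = probability that player i
# cooperates; u[i] = planner's payment added to cooperation's payoff.
A = np.array([[3.0, 0.0], [5.0, 1.0]])   # assumed payoff matrix (rows: cooperate, defect)

def true_field(x, u):
    adv1 = (A[0] - A[1]) @ np.array([x[1], 1 - x[1]]) + u[0]
    adv2 = (A[0] - A[1]) @ np.array([x[0], 1 - x[0]]) + u[1]
    return np.array([x[0] * (1 - x[0]) * adv1,
                     x[1] * (1 - x[1]) * adv2])

# System identification from scarce data (stand-in for SIAR). Side information:
# the simplex is invariant, so the payoff-advantage term is fitted inside a
# structural x(1-x)(... + u) factor, which makes the learned field vanish on
# the boundary; coefficients are recovered by plain least squares.
def features(xo):                          # monomials of the opponent's state
    return np.array([1.0, xo])

def collect_data(n=15, dt=0.05):
    X, U, Y = [], [], []
    x = np.array([0.6, 0.4])
    for _ in range(n):
        u = rng.uniform(-1, 1, size=2)
        dx = true_field(x, u)
        X.append(x.copy()); U.append(u); Y.append(dx)
        x = np.clip(x + dt * dx, 1e-3, 1 - 1e-3)
    return np.array(X), np.array(U), np.array(Y)

def identify(X, U, Y):
    thetas = []
    for i in range(2):
        scale = X[:, i] * (1 - X[:, i])
        Phi = np.stack([features(xo) for xo in X[:, 1 - i]]) * scale[:, None]
        target = Y[:, i] - scale * U[:, i]
        thetas.append(np.linalg.lstsq(Phi, target, rcond=None)[0])
    return thetas

def model_field(x, u, thetas):
    return np.array([x[i] * (1 - x[i]) * (features(x[1 - i]) @ thetas[i] + u[i])
                     for i in range(2)])

# Receding-horizon MPC using the identified model: choose bounded payments that
# drive the predicted state toward a target strategy profile.
def mpc_payment(x, thetas, target, horizon=10, dt=0.05):
    def cost(u):
        z = x.copy()
        for _ in range(horizon):
            z = z + dt * model_field(z, u, thetas)
        return np.sum((z - target) ** 2) + 1e-3 * np.sum(u ** 2)
    res = minimize(cost, np.zeros(2), bounds=[(-2, 2)] * 2, method="L-BFGS-B")
    return res.x

# Closed loop: steer the (unknown) true system toward an assumed target profile.
thetas = identify(*collect_data())
target = np.array([0.9, 0.9])
x, dt = np.array([0.2, 0.3]), 0.05
for t in range(200):
    u = mpc_payment(x, thetas, target)
    x = np.clip(x + dt * true_field(x, u), 1e-3, 1 - 1e-3)
print("final strategy profile:", np.round(x, 3))

In this toy setting the uncontrolled replicator dynamics drift toward mutual defection, while the MPC-computed payments hold the system near the cooperative target; the paper's experiments additionally cover stabilizing chaotic dynamics and avoiding specified regions of the state space.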
