Event-Driven Receding Horizon Control for Distributed Estimation in Network Systems

24 Sep 2020 · Shirantha Welikala, Christos G. Cassandras

We consider the problem of estimating the states of a distributed network of nodes (targets) through a team of cooperating agents (sensors) that persistently visit the nodes so as to minimize an overall measure of estimation error covariance evaluated over a finite period. We formulate this as a multi-agent persistent monitoring problem in which the goal is to control each agent's trajectory, defined as a sequence of target visits together with the dwell time spent making observations at each visited target. A distributed on-line agent controller is developed in which each agent solves a sequence of receding horizon control problems (RHCPs) in an event-driven manner. A novel objective function is proposed for these RHCPs to optimize the effectiveness of this distributed estimation process, and its unimodality is established under certain assumptions. Moreover, a machine learning solution is proposed to improve the computational efficiency of this distributed estimation process by exploiting the history of each agent's trajectory. Finally, extensive numerical results are provided, indicating significant improvements over other state-of-the-art agent controllers.
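The core trade-off the RHCP objective balances can be illustrated with a minimal sketch (this is an assumption for illustration, not the paper's exact model): take a scalar linear-Gaussian target whose estimation error covariance grows while no agent observes it and contracts via a Kalman covariance update while an agent dwells at it. The hypothetical parameters `a`, `q`, `h`, `r` below stand in for the target's dynamics, process noise, observation model, and measurement noise.

```python
# Minimal sketch (not the paper's exact model): one scalar linear-Gaussian
# target. Its error covariance grows while unobserved and contracts via a
# Kalman update while an agent dwells at it.

def predict(P, a=1.0, q=0.5):
    """Covariance propagation for one step with no observation."""
    return a * P * a + q

def update(P, h=1.0, r=0.2):
    """Kalman covariance update for one measurement during a dwell."""
    s = h * P * h + r            # innovation covariance
    k = P * h / s                # Kalman gain
    return (1.0 - k * h) * P     # posterior covariance

def simulate(dwell_steps, horizon=10, P0=1.0):
    """Evolve the target's covariance over a finite horizon; an agent
    dwells at the target for the first `dwell_steps` steps only."""
    P = P0
    trace = []
    for t in range(horizon):
        P = predict(P)
        if t < dwell_steps:
            P = update(P)
        trace.append(P)
    return trace

# A longer dwell leaves the target better estimated at the horizon's end,
# but in the multi-target setting that time is taken from other targets --
# the allocation the RHCP objective must optimize.
unvisited = simulate(dwell_steps=0)
visited = simulate(dwell_steps=4)
```

In the paper's setting each agent faces this trade-off jointly across all targets on its trajectory, choosing both the visit sequence and the dwell times in each receding horizon problem.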
