A Supervised-Learning based Hour-Ahead Demand Response of a Behavior-based HEMS approximating MILP Optimization

3 Nov 2021 · Huy Truong Dinh, Kyu-haeng Lee, Daehee Kim

The demand response (DR) program of a traditional home energy management system (HEMS) usually intervenes in appliance operation, controlling or scheduling appliances to achieve multiple objectives such as minimizing energy cost and maximizing user comfort. In this study, instead of intervening in appliance operation and changing resident behavior, our proposed hour-ahead DR strategy first learns the residents' appliance-use behavior and then silently controls the energy storage system (ESS) and renewable energy source (RES) to minimize the daily energy cost based on that knowledge. To accomplish this, our proposed deep neural network (DNN) models approximate mixed-integer linear programming (MILP) optimization via supervised learning. The training datasets are created from the optimal outputs of a MILP solver run on historical data. After training, the DNNs control the ESS and RES at each time slot using real-time data from the surrounding environment. For comparison, we develop two further strategies: a multi-agent reinforcement-learning-based strategy (hour-ahead) and a forecast-based MILP strategy (day-ahead). For evaluation and verification, the proposed strategies are applied to three different real-world homes with real-world real-time global horizontal irradiation and real-time prices. Numerical results verify that the proposed MILP-based supervised-learning strategy is effective in terms of daily energy cost and performs best among the three proposed strategies.
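The pipeline described above can be summarized as: (1) solve a MILP offline over historical data to obtain optimal ESS/RES dispatch labels, (2) train a DNN to regress from the hour-ahead state to those labels, and (3) deploy the trained DNN online at each time slot. Below is a minimal sketch of steps (2) and (3), not the authors' implementation: the feature set, network size, and the synthetic tensors standing in for MILP-solver outputs are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class DispatchNet(nn.Module):
    """DNN that imitates the MILP solver's optimal ESS/RES dispatch."""
    def __init__(self, n_features=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # e.g. ESS power, RES power (assumed)
        )

    def forward(self, x):
        return self.net(x)

# Stand-in training data: in the paper, the labels Y are the optimal
# outputs of a MILP solver run on historical data; random tensors are
# used here only to keep the sketch self-contained.
X = torch.randn(1024, 4)   # assumed state: [price, GHI, load, ESS SoC]
Y = torch.randn(1024, 2)   # MILP-optimal dispatch labels

model = DispatchNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):   # supervised imitation of the MILP solutions
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()

# Hour-ahead deployment: at each time slot, feed the real-time state
# and apply the predicted dispatch to the ESS and RES controllers.
with torch.no_grad():
    state = torch.tensor([[0.12, 350.0, 1.8, 0.55]])  # hypothetical slot
    dispatch = model(state)
```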
