Abstract
In the face of current global energy challenges and the growing significance of energy efficiency and carbon neutrality, optimizing the control strategies of industrial energy supply systems has gained importance. Deep Reinforcement Learning (DRL) presents a promising opportunity for control strategy optimization, enabling substantial reductions in operating costs and carbon emissions. However, the lack of interpretability of such black-box models hinders broader application in practice.
This paper introduces an Interpretable Machine Learning approach that aims to reconcile advanced optimization with interpretability. First, DRL algorithms are deployed on a simulation model of an industrial energy supply system to optimize its control strategy. Training and validation data sets generated by the DRL-based control strategy are then used to train Decision Trees. Being intrinsically interpretable, Decision Trees represent conditional logic and allow the extraction of rules for both local and global interpretability. Through optimization procedures such as depth limitation and input feature selection, the Decision Trees can either operate directly as controllers for energy supply systems or serve as tools to adapt the conventional rule-based control strategy.
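As a minimal illustration of this distillation step, the sketch below fits a depth-limited Decision Tree to state-action pairs recorded from a trained DRL controller. The feature names, placeholder data, and scikit-learn usage are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch: distilling a DRL control strategy into a shallow,
# interpretable Decision Tree. Data and feature names are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in for states visited by the DRL agent and the discrete control
# actions it chose; in practice these come from rollouts of the trained
# DRL controller on the simulation model.
states = rng.random((5000, 4))               # e.g. temperatures, loads, prices
actions = (states[:, 0] > 0.5).astype(int)   # placeholder for DRL decisions

# Depth limitation keeps the extracted rule set small and readable.
tree = DecisionTreeClassifier(max_depth=3)
tree.fit(states, actions)

# The fitted tree yields explicit if-then rules for inspection,
# which can be compared against the conventional rule-based strategy.
print(export_text(tree, feature_names=["T_ret", "load", "price", "hour"]))
```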
Tested on a white-box model, the conventional rule-based control strategy of an industrial energy supply system, the proposed approach correctly identifies the underlying rules of that strategy. Applied to a black-box model, the DRL-based control strategy of the exemplary use case, it shows promising results as well as remaining challenges across various experiments.
Keywords: explainable artificial intelligence, interpretable machine learning, intelligent energy, industrial energy supply systems, control strategy optimization, deep reinforcement learning