Abstract
Artificial intelligence (AI) techniques are being applied across an expanding range of fields and industries. However, many AI models operate as “black boxes”, making it difficult to understand the reasoning behind their outputs. This lack of transparency can allow bias, discrimination, errors, and defects to go undetected. Explainable AI (XAI) techniques are a possible solution to this issue. In this work, Reinforcement Learning (RL) is used to solve a delivery route optimization problem, and an intrinsically interpretable XAI method (rule-based modelling) and two post-hoc analysis methods (Shapley values and Local Interpretable Model-agnostic Explanations (LIME)) are applied to explain and compare the predictions of the RL agent.