Abstract
Artificial intelligence applications in critical infrastructure pose requirements for explainability and interpretability, a point also accentuated in recent European legislation on AI. In this position paper, we provide a glimpse into an ongoing project on AI development and implementation to improve highly instrumented wastewater treatment processes, where explainability and interpretability are needed to satisfy quality demands and requirements for trustworthy AI. Taking previous HCXAI and XAI contributions as a starting point, we discuss three takeaways from the case concerning (a) the benefit of targeted explanations, (b) the need for explanations to drive user action, and (c) the implications of inaccurate explanations. In conclusion, we reflect on the role of explanations in assessments and audits of AI applications in critical infrastructure.