Abstract
Artificial Intelligence (AI) and its ethical implications are not new to academia and business. The challenges of embedding ethical AI principles in practice are evident, and even though the gap between theory and practice is narrowing, progress does not meet the urgent need for responsible technology development and deployment. Embedding ethical principles in existing risk assessment practices is a novel, process-oriented approach that can contribute to operationalising AI ethics in organisational practice. This paper elaborates on the initial phase of the collaborative development of a methodology for ethical risk assessment of AI, involving private and public organisations in Norway. We reflect upon our experience and present key take-aways in the form of three lessons learnt from embedding the model-based security risk analysis method CORAS and the Story Dialog Method (SDM) in the initial phase of the collaborative methodology development. The study concludes that ethical risk assessment of AI is feasible in practice, and it explores design issues related to cross-sectoral settings, the flexibility of the methodology, and power relationships.