Ethical Risk Assessment of AI in Practice Methodology: Process-oriented Lessons Learnt from the Initial Phase of Collaborative Development with Public and Private Organisations in Norway

Abstract

Artificial Intelligence (AI) and its ethical implications are not new to academia and business. The challenges of embedding principles for ethical AI in practice are evident, and even though the gap between theory and practice is narrowing, progress does not yet meet the urgent need for responsible technology development and deployment. Embedding ethical principles in existing risk assessment practices is a novel, process-oriented approach that can contribute to operationalising AI ethics in organisational practice. This paper elaborates on the initial phase of the collaborative development of an ethical risk assessment of AI methodology, involving private and public organisations in Norway. We reflect upon our experience and present key takeaways in the form of three lessons learnt from embedding the model-based security risk analysis method CORAS and the Story Dialog Method (SDM) in the initial phase of the collaborative methodology development. This study concludes that ethical risk assessment of AI in practice is feasible, and it explores design issues related to cross-sectoral settings, flexibility of the methodology, and power relationships.

Category

Article in business/trade/industry journal

Client

  • Research Council of Norway (RCN) / 338170

Language

English

Author(s)

  • Natalia Murashova
  • Diana Saplacan
  • Aida Omerovic
  • Leonora Onarheim Bergsjø

Affiliation

  • Østfold University College
  • University of Oslo
  • Norwegian University of Science and Technology
  • SINTEF Digital / Sustainable Communication Technologies
  • University of Agder

Year

2025

Published in

International Conference on Advances in Computer-Human Interactions (ACHI)

ISSN

2308-4138

Publisher

International Academy, Research and Industry Association (IARIA)

Page(s)

33–40