Explainable AI (XAI)

Machine learning (ML) and artificial intelligence (AI) have revolutionised many sectors of society and many everyday tasks. However, many machine learning and other AI methods are not easily interpretable. In Explainable AI, we focus on designing and developing ways to increase the transparency of machine learning methods and to obtain more descriptive insights into their behaviour, without affecting their flexibility and power.

Machine learning methods are often very complex, sometimes containing millions of parameters that are selected and adjusted to maximise the model's performance. Such complexity is key to successfully tackling problems that are otherwise impossible for classical approaches, but at the same time it makes it harder to provide meaningful, human-useful explanations of the algorithm's behaviour.
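
To make this concrete, the sketch below illustrates one widely used, model-agnostic explanation technique: permutation feature importance, which ranks input features by how much a trained model's accuracy degrades when each feature is shuffled. The dataset, the random-forest model, and the scikit-learn calls are illustrative assumptions for this sketch, not a description of any specific SINTEF method.

```python
# A minimal, model-agnostic sketch of one common XAI technique:
# permutation feature importance. Dataset and model are chosen purely
# for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard benchmark dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test accuracy drops:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features as a simple, human-readable summary.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} ± {result.importances_std[i]:.3f}")
```

Techniques like this give a first, coarse answer to "what is the model paying attention to?"; richer explanations typically require methods tailored to the specific model, data, and audience.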

Since ML is now applied in almost every sector of our everyday lives, understanding why and how a certain outcome has been reached is crucial for validating the decision, ensuring compliance with regulations, and discovering unfair or otherwise incorrect results. For applications such as risk assessment, medical recommendations, or industrial processes and operational support, the consequences of a wrong decision can be severe. At the same time, without proper transparency and interpretability, some sectors are reluctant to take advantage of the power of ML methods, hindering their own growth and potential.

Since explanations must always be tailored to the specific scenario and users, interdisciplinarity is a crucial requirement for developing appropriate and useful Explainable AI methods. At SINTEF, our customers find research groups and world-class experts covering virtually every engineering sector, ensuring that the right knowledge is always within reach. We actively collaborate with our customers to develop and apply Explainable AI methods tailored to their specific use cases and operational needs.
