KnowMe AI: Automatic interpretation of non-verbal expressions

Develop a system for automatic interpretation of sounds, facial expressions and body gestures of people who cannot speak.

A considerable number of people are excluded from our society because they cannot communicate verbally. This may be due to reduced capabilities in thinking, attention, memory, learning and language comprehension. They communicate relatively well with parents or people who are in daily contact with them. However, it is much harder for them to express themselves and be understood by someone outside this close circle.

The project owner, Lifetools AS, has developed an iPad app (approved by NAV) to improve the communication process for these people. The app contains a dictionary of a particular person's common non-verbal expressions. To get the meaning of an expression, one manually selects the relevant signs (for example, a sound and a hand gesture). Although this approach is definitely an improvement over the written notes widely used today, it is still far from optimal.
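The dictionary idea described above can be sketched as a simple lookup from a combination of observed signs to a stored meaning. This is a minimal illustration only, not Lifetools' actual implementation; all sign and meaning strings are invented.

```python
# Hypothetical sketch of a personal expression dictionary:
# a combination of manually selected signs maps to an intended meaning.
# Frozensets make the lookup order-independent.
expression_dict = {
    frozenset({"high-pitched sound", "hand wave"}): "I want attention",
    frozenset({"humming", "rocking"}): "I am content",
    frozenset({"sharp cry", "head turned away"}): "I do not want this",
}

def interpret(observed_signs):
    """Return the stored meaning for a set of selected signs, if any."""
    return expression_dict.get(frozenset(observed_signs), "unknown expression")

print(interpret(["hand wave", "high-pitched sound"]))  # -> I want attention
```

In practice each user would have their own dictionary, since the same sound or gesture can carry different meanings for different people.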

We want to significantly improve this technology, and thus the quality of life for these children and adults, by developing a system that will automatically recognize and interpret these non-verbal communication signs.

This will allow the majority of these children and adults to communicate better with people outside their close contacts.

SINTEF contributes to the development of sensors (a compact 2D and 3D camera with a microphone) and machine learning methods for the extraction of body gestures and facial expressions from video and 3D data.
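One way such recognition could work, sketched very roughly: extract a feature vector from video (for example, pose-landmark coordinates) and match it against per-person templates of known expressions. This is an assumption for illustration, not the project's actual method; the template labels and feature values are synthetic.

```python
import numpy as np

# Per-person templates: a mean feature vector for each known expression.
# Features here are synthetic stand-ins for pose-landmark coordinates.
templates = {
    "reaching out": np.array([0.9, 0.1, 0.4]),
    "waving":       np.array([0.2, 0.8, 0.5]),
}

def classify(features):
    """Return the template label nearest (Euclidean) to the observed features."""
    return min(templates, key=lambda label: np.linalg.norm(templates[label] - features))

print(classify(np.array([0.85, 0.15, 0.45])))  # -> reaching out
```

A real system would use learned models rather than nearest-template matching, but the per-person adaptation step is the key idea: the same gesture means different things for different users.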

Key Factors

Project duration

2020 - 2023


Financing

Research Council of Norway

Cooperation Partners

SINTEF, Norwegian Computing Center, EmTech AS, municipality of Kongsberg and The Habilitation Center at Vestfold Hospital Trust (VHT)

Project Type