
Discriminatory bias

Biased algorithms can discriminate against certain groups of people based on, among other attributes, their gender, race, or sexuality. Here is how we approach this problem at SINTEF.


Machine learning algorithms are extremely efficient and can be more accurate than their human counterparts or traditional analytical methods. This power comes from learning on large amounts of real-world data, but it also raises several problems:

  • Algorithms can mirror and perpetuate biases that are present in society and reflected in data.
  • The final reasoning of algorithms is not necessarily transparent to the programmer and can change each time new data comes in: The decision process becomes a black box.
  • When bias is propagated by algorithms, it can have greater consequences: The efficiency of these programs means that they can be used on a larger scale, and in addition, their decisions are more consistent, potentially propagating the same harmful biases without exceptions.

This can happen across an immense number of applications: In job search and evaluation engines, healthcare tools, credit scoring, translations, face recognition, or even accident handling in self-driving cars. In interactive media it has been observed that bias contributes to polarized views and online “bubbles” in which these views are reinforced and never challenged.

Overall, there are ethical guidelines such as the EU Assessment List for Trustworthy Artificial Intelligence and the Principles for the Ethical Use of AI in the UN System. While they are somewhat challenging to implement in practice, they clearly state that unfair bias should be avoided.

With appropriate techniques, bias in algorithms can be more easily controlled than biases in the real world, which often require a few generations to change.
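One example of such a technique is reweighing (Kamiran & Calders), a pre-processing step that assigns each training sample a weight so that the sensitive attribute and the outcome label become statistically independent. The sketch below is a minimal illustration of the idea, assuming a pandas DataFrame with one sensitive-attribute column and a binary label column (the column names are hypothetical); it is not a description of a specific SINTEF implementation.

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return one weight per row so that group membership and label
    become independent after weighting: w(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(df)
    p_group = df[group_col].value_counts() / n          # P(g)
    p_label = df[label_col].value_counts() / n          # P(y)
    p_joint = df.groupby([group_col, label_col]).size() / n  # P(g, y)
    return df.apply(
        lambda row: (p_group[row[group_col]] * p_label[row[label_col]])
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# Hypothetical usage: pass the weights to any learner that accepts
# per-sample weights, e.g. model.fit(X, y, sample_weight=weights).
```

Under-represented combinations (for example, a minority group with a positive label) receive weights above 1, so the trained model no longer learns the correlation between group and outcome that was present in the raw data.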

At SINTEF we practice and expand on these methods to help provide equal chances to discriminated groups. We work on the data debt that emerges from bias, on its management, and on how this data debt affects the whole lifecycle of the AI (eco)system (e.g. from capabilities to sustainability to adoption). We design models that are as transparent as possible and evaluate them on their fairness across different groups and group intersections, rather than on overall accuracy alone (see the sketch below).
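In its simplest form, this kind of evaluation means reporting metrics per group and per group intersection instead of a single aggregate score. The following sketch is a hypothetical example, assuming a results DataFrame with binary `y_true` and `y_pred` columns plus one column per sensitive attribute (all column names are assumptions for illustration):

```python
import pandas as pd

def per_group_metrics(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Accuracy and fairness-related rates per (intersectional) group."""
    def summarize(g: pd.DataFrame) -> pd.Series:
        return pd.Series({
            "n": len(g),
            "accuracy": (g["y_true"] == g["y_pred"]).mean(),
            "positive_rate": g["y_pred"].mean(),              # demographic parity
            "tpr": g.loc[g["y_true"] == 1, "y_pred"].mean(),  # equal opportunity
        })
    return df.groupby(group_cols).apply(summarize)

# Single groups:        per_group_metrics(results, ["gender"])
# Group intersections:  per_group_metrics(results, ["gender", "race"])
```

Comparing the rows of such a table exposes disparities, for instance a lower true-positive rate for one intersection, that a single overall accuracy figure would hide.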

In addition, we work to improve the understanding of bias in machine learning, to develop guidelines for fair design, and to address the ethical need for regulation and fair AI.
