
Discriminatory bias

Biased algorithms can discriminate against certain groups of people based on, among other things, their gender, race, or sexuality. Here is how we approach this problem at SINTEF.


Machine learning algorithms are extremely efficient and often more accurate than human experts or analytical methods. This power comes from exploiting large amounts of real-world data, but it also poses problems:

  1. Algorithms can mirror and perpetuate biases that are present in society and reflected in data.
  2. The reasoning behind an algorithm's decisions is not necessarily transparent to its programmers and can change each time new data comes in: the decision process becomes a black box.

When bias is propagated by algorithms, it can have greater consequences: the efficiency of these programs means they can be deployed at a larger scale, and their decisions are more consistent, potentially repeating the same harmful biases without exception.

This can happen across an immense number of applications: in job search and evaluation engines, healthcare tools, credit scoring, translations, face recognition, or even accident handling in self-driving cars. In interactive media it has been observed that bias contributes to polarized views and online “bubbles” in which these views are reinforced and never challenged.

It remains unclear who holds responsibility for algorithmic decisions, and while there are many guidelines on fair algorithms, they are rarely followed due to a lack of education and incentives.

Read about some examples of AI discrimination: racism in US healthcare algorithms, racism in Google Photos tagging, sexism in Amazon’s hiring tool, sexism in Google Translate, and anti-LGBT demonetization on YouTube.

With appropriate techniques, bias in algorithms can be controlled more easily than bias in the real world, which often takes generations to change.

At SINTEF, we practice and expand on these methods to help provide equal opportunities to groups that face discrimination. We focus on the following aspects:

  1. Good data: Representative and balanced across various backgrounds, for example different genders or races, and their intersections, for example Black women.
  2. Model design: Exclusion of sensitive information and its correlates, thoughtful design of predictive targets, correcting biases with transfer learning.
  3. Algorithmic design: Designing algorithms to be more interpretable and transparent, guided learning processes for fairness.
  4. Fair evaluation: Evaluating models on their fairness across different groups and group intersections, rather than only on the models’ overall accuracy (see the sketch after this list).
  5. Education: Improving the understanding of bias in machine learning and fair design guidelines, as well as the ethical need for regulations and fair AI.
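
As a minimal illustration of the fourth point, the sketch below compares a model's accuracy and positive-prediction rate across groups and group intersections instead of reporting a single overall score. The data, column names, and metrics shown here are hypothetical examples for illustration only, not SINTEF's actual tooling or datasets.

```python
# Minimal sketch of fair evaluation across groups and intersections.
# The data and column names ("gender", "race") are invented examples.
import pandas as pd

# Hypothetical evaluation set: true labels, model predictions, group attributes.
df = pd.DataFrame({
    "gender": ["female", "female", "male", "male", "female", "male"],
    "race":   ["black", "white", "black", "white", "black", "white"],
    "y_true": [1, 0, 1, 1, 0, 0],
    "y_pred": [0, 0, 1, 1, 1, 0],
})

def group_report(frame, by):
    """Accuracy and positive-prediction rate for each group defined by `by`."""
    frame = frame.assign(correct=(frame["y_true"] == frame["y_pred"]))
    return frame.groupby(by).agg(
        n=("y_pred", "size"),
        accuracy=("correct", "mean"),
        # Share of positive predictions per group; large gaps between groups
        # point to a violation of demographic parity.
        positive_rate=("y_pred", "mean"),
    )

# Metrics per single attribute ...
print(group_report(df, ["gender"]))
print(group_report(df, ["race"]))

# ... and per intersection (e.g. Black women), where disparities can remain
# hidden even when each attribute looks fine on its own.
print(group_report(df, ["gender", "race"]))
```

A model with high overall accuracy can still perform poorly for a specific intersection, which is why the per-group breakdown, not the aggregate number, should drive the evaluation.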
