New research reveals that even the most “fair” algorithms can’t escape the influence of human judgment, raising tough questions about who really controls the outcomes in an AI-driven world.

AI is playing a growing role in our lives and is now used in high-stakes areas such as hiring and lending. While its development offers many benefits, using AI fairly is challenging, and doing so starts with understanding how humans and AI interact.

The common assumption is that humans overseeing AI decisions will prevent discrimination, but EU Policy Lab research, supported by the European Centre for Algorithmic Transparency (ECAT), shows the reality is more complicated.

Fairness in AI – trickier than we thought

The study found that people often blindly follow AI recommendations, even when those recommendations are unfair.

Experiments with HR and banking professionals showed that AI-assisted decisions about hiring and loans were still influenced by human biases, even when the AI was designed to be neutral.

Even “fair” AI couldn’t completely remove these biases. Interviews and workshops showed that professionals often prioritized their company’s goals over fairness. This points to the need for clearer rules about when to override AI recommendations.

Toward systemic fairness

The EU Policy Lab’s research highlights the need to move beyond just having individuals oversee AI and instead adopt a system-wide approach that addresses both human and algorithmic biases. To effectively minimize discrimination, action is needed on multiple levels.

Technical measures should ensure that AI systems are designed to be fair and are regularly checked and updated to avoid potential errors – a minimal sketch of such a check follows below. Strategic organizational interventions can create a culture that values fairness, including training employees on how to manage AI tools.

Political action is also crucial – establishing clear guidelines for human-AI collaboration will help better manage the risk of discrimination.
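To make the technical level concrete, here is a minimal sketch of what a routine fairness check might look like – for instance, comparing approval rates across groups (a demographic parity gap). The metric, the threshold, and all names in the code are illustrative assumptions, not details from the EU Policy Lab study.

```python
# Illustrative only: a routine check comparing positive-decision rates
# across two groups (demographic parity gap). The data, threshold, and
# names are hypothetical, not taken from the EU Policy Lab study.

def approval_rate(decisions, group):
    """Share of positive decisions for one group; decisions are (group, approved) pairs."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))

# Hypothetical loan decisions recorded as (group, approved).
decisions = [
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
]

gap = parity_gap(decisions, "A", "B")
if gap > 0.05:  # hypothetical tolerance; a real one would be set by policy
    print(f"Fairness alert: approval-rate gap of {gap:.2f} exceeds tolerance")
```

Run regularly on live decisions, a check of this kind is one way the “regular updates” mentioned above could catch drift toward unfair outcomes.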

The challenge of human and AI decision-making

The central issue is how humans and AI make decisions together. Article 14 of the EU AI Act stipulates that human oversight should catch problems caused by poorly designed systems. In practice, however, people often disregard what the AI suggests or apply their own biases, which undermines the goal of fair decisions.

A study of 1,400 professionals from Germany and Italy showed that human supervisors tended to dismiss “fair” AI recommendations and follow their own gut feeling, which is often biased.

During interviews and workshops, participants acknowledged that their decisions were shaped by many factors, including unconscious biases. These influences need to be understood and addressed.

Giving decision-makers the power to make the right choices

To really cut down on bias in AI-assisted decisions, it is essential to give decision-makers the right tools and rules. We need to be crystal clear about when it is acceptable to override the AI’s suggestions, and we need to monitor outcomes over time.

Giving decision-makers feedback on how they are doing and where they might be going wrong helps them make better choices. It pushes them to reflect on their decisions, which leads to fairer and more balanced AI-supported processes.
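As one hedged illustration of such feedback, the sketch below tallies how often a reviewer overrides the AI’s recommendation, broken down by applicant group; a lopsided pattern would be exactly the kind of signal worth surfacing. The log structure and field names are assumptions made for the example.

```python
# Illustrative sketch of the feedback described above: how often a
# reviewer's decision differs from the AI recommendation, per group.
# The record structure and field names are hypothetical.

from collections import defaultdict

def override_rates(log):
    """For each applicant group, the share of cases where the human
    decision differed from the AI recommendation."""
    overrides = defaultdict(int)
    totals = defaultdict(int)
    for case in log:
        totals[case["group"]] += 1
        if case["human_decision"] != case["ai_recommendation"]:
            overrides[case["group"]] += 1
    return {g: overrides[g] / totals[g] for g in totals}

# Hypothetical decision log: overriding mostly one group's favourable
# AI recommendations would stand out immediately here.
log = [
    {"group": "A", "ai_recommendation": "hire", "human_decision": "hire"},
    {"group": "A", "ai_recommendation": "hire", "human_decision": "hire"},
    {"group": "B", "ai_recommendation": "hire", "human_decision": "reject"},
    {"group": "B", "ai_recommendation": "hire", "human_decision": "reject"},
]

for group, rate in override_rates(log).items():
    print(f"Group {group}: AI recommendation overridden in {rate:.0%} of cases")
```

The point is not this specific metric but the mirror it holds up: seeing that “fair” recommendations are overridden far more often for one group invites exactly the reflection the research calls for.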

The Artificial Intelligence Act – a game-changer

The EU AI Act, adopted in 2024, establishes rules for artificial intelligence and is seen as a global standard.

The EU Policy Lab’s findings are important for shaping future rules and guidelines – it is not just about complying with the regulations, but also about making them work in real life.

The research shows that making AI fair takes constant work and a flexible approach, and the responsibility lies with both the people who build the systems and the people who use them. Without a broad approach – one that takes in social, technical, and political dimensions – making AI fair will be an uphill struggle.

Key points from the research

The EU Policy Lab’s research used a mix of methods – experiments, interviews, and workshops – to show how complicated fairness in AI really is. To build fairer and more transparent AI systems, we need to understand how humans and AI make decisions, and how they influence each other.

Ultimately, achieving fairness requires more than eliminating bias in algorithms. It also demands careful selection of training and validation data, so that human prejudice or deliberate disinformation is not carried over into AI systems. A shift in human attitudes and organizational approaches is equally essential. Using AI to make decisions gives us a chance to reflect on our own biases and how we make choices, both personally and as a society.
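On the data-selection point, a first step can be as simple as measuring how labels in historical training data are distributed across groups before any model is built. The sketch below does only that; the field names and data are hypothetical.

```python
# Illustrative pre-training check: compare how often each group carries
# a positive label in historical data, so prejudice baked into the
# labels is at least visible. Field names and data are hypothetical.

def positive_label_rates(samples):
    """Share of positive labels per group in a labelled dataset."""
    counts = {}  # group -> [positives, total]
    for s in samples:
        bucket = counts.setdefault(s["group"], [0, 0])
        bucket[0] += s["label"]
        bucket[1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical historical hiring data (label 1 = hired).
training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

rates = positive_label_rates(training_data)
print({g: round(r, 2) for g, r in rates.items()})  # {'A': 0.67, 'B': 0.33}
```

A gap like this does not prove the data is unusable, but it flags exactly where a model trained on it could quietly inherit past discrimination.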
