Homework, research, work – we use AI for everything. Yet we never stop to think: could we unknowingly be exacerbating the gender or racial gap?

As much as we might think of AI as an infallible, almost superhuman entity, it remains undeniably human. It is a reflection of how society functions, thinks and lives. The deep-rooted, centuries-old disparities that have plagued society are therefore inherently part of how AI operates, and AI often magnifies these inequalities.

Such biases have a ripple effect and act as a self-fulfilling prophecy: biased training data leads AI systems to reproduce and promote the outdated stereotypes it encodes.

As of 2025, advanced AI systems have been found to exhibit gender and racial bias at rates as high as 70% (Stanford HAI, 2025). This is compounded by the fact that women engage significantly less with technology than men, owing to the gender-based discrimination and harassment they often face online. AI is therefore exposed predominantly to users who may overlook, validate, or promote such bias rather than correct it.

Moreover, AI models have been observed to repeatedly prefer content created by other AI tools. With AI-generated content constantly on the rise, the data that future models train on will itself be contaminated with such bias. This creates a dangerous feedback loop in which bias begets more bias.
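To see how such a feedback loop can compound, here is a minimal, purely illustrative Python simulation (the 60% starting skew and the 1.1 amplification factor are assumptions for illustration, not measurements): a toy model is repeatedly retrained on its own slightly skewed outputs, and the skew grows with each generation.

```python
import random

random.seed(42)

# Toy corpus: 1 = stereotyped item, 0 = neutral item.
# Assume the initial human-written data is already 60% stereotyped.
data = [1] * 60 + [0] * 40

for generation in range(5):
    # "Train": the model learns the share of stereotyped content.
    p_stereotyped = sum(data) / len(data)

    # "Generate": the model slightly over-produces the majority pattern;
    # the 1.1 amplification factor is an assumed stand-in for models
    # preferring the dominant (AI-like) content.
    p_output = min(1.0, p_stereotyped * 1.1)

    # The next training set is drawn from the model's own outputs.
    data = [1 if random.random() < p_output else 0 for _ in range(1000)]

    print(f"generation {generation + 1}: {p_stereotyped:.0%} stereotyped")
```

Run over just a few generations, the stereotyped share drifts steadily upwards – the contamination dynamic described above.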

How Does AI Gender Bias Affect Education?

With AI now used extensively in educational spaces, and many schools incorporating it into their systems and curricula, AI gender bias poses a serious threat to future society. AI systems that serve up inherently biased examples or content solidify gender roles from a young age.

For instance, many AI teaching tools that suggest career paths link boys with science and technology-driven fields and girls with caregiving and arts-based jobs, reinforcing traditional gender roles. AI tutors may also consistently present male scientists or doctors more prominently than female ones, subtly shaping children’s understanding of which roles they can aspire to.

What Impact Does AI Bias Have on Media and Public Policy?

This phenomenon also affects politics, public policy and the media. AI training data about political contexts often underrepresents women, leading models to prioritise male politicians and portray them more accurately and positively in political analyses. AI used in policy work, such as predicting the impact of new laws, may likewise miss gender-specific effects.

For instance, it may overlook how parental leave policies affect women and men differently, producing laws that do not serve all citizens fairly and proportionately. AI-assisted media coverage, records and transcripts, all increasingly common today, are likewise at risk of misrepresenting women, as is AI-driven content curation. In fact, a popular AI image app was found to generate oversexualized images of women, perpetuating regressive stereotypes.

Does Gender Bias Show Up in Research, Recruitment, and Marketing?

AI is currently being tested for large-scale research and data collection, and such inherent biases risk making women, their needs and their input invisible. Health prediction models and diagnostic tools trained predominantly on male data may misdiagnose women or overlook their distinct healthcare needs and responses. Female-specific conditions such as endometriosis, which already takes an average of 7–10 years to diagnose, are underrepresented in training data, risking even longer delays in necessary medical care.

Moreover, an AI recruitment tool once used by Amazon was shown to have learnt that male candidates were preferable for certain roles, producing discrimination and unfairness in the hiring process. AI also risks making women less visible in recommendation systems for political or advisory roles, gradually eroding women’s participation in leadership positions.

Lastly, AI gender bias has consequences for the marketing industry. Over time, AI may respond differently to users based on gender, for example by promoting more ‘feminine’ products to users identified as female, subtly reinforcing the stereotypes that tie certain products to women.

AI in Face Recognition Tools

Another fundamental facet of this issue is racial AI bias, which disproportionately affects people of colour, especially women. In 2020, a study found that numerous popular face recognition tools performed significantly worse on women with darker skin. This has led to increased false arrests and surveillance risks, with error rates for darker-skinned women reaching up to 30%, compared with around 1% for lighter-skinned people.
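Disparities like these are typically surfaced through disaggregated evaluation: measuring error rates separately for each demographic group rather than relying on a single overall accuracy figure. Below is a minimal Python sketch of that kind of audit, using invented records purely to illustrate the calculation.

```python
from collections import defaultdict

# Invented audit records, purely illustrative:
# (demographic group, ground-truth match, model's prediction).
records = [
    ("darker-skinned women", True, False),
    ("darker-skinned women", False, True),
    ("darker-skinned women", True, True),
    ("lighter-skinned men", True, True),
    ("lighter-skinned men", False, False),
    ("lighter-skinned men", True, True),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

# A single overall accuracy figure would hide the gap these rates expose.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate")
```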

One such case is that of Nijeer Parks, a Black American man who spent 10 days in custody after facial recognition software falsely identified him as a suspect in an assault case. Though such tools can simplify and speed up processes, they can quickly become enablers of discrimination, disproportionate scrutiny and exclusion.

What Does AI Racial Bias Mean for Crime and Law?

This subtle yet detrimental bias also extends to criminal justice. Research has found that Large Language Models (LLMs), such as ChatGPT, exhibit covert racism against speakers of certain dialects, particularly African American English (AAE).

Such AI tools often associate AAE speakers with less prestigious jobs and show bias in hypothetical criminal sentencing, handing out harsher sentences, including the death penalty more often, to AAE speakers than to speakers of Standard American English. This seemingly trivial linguistic bias could lead to widespread injustice as AI enters decision-making.

What are the Consequences for Healthcare and Communication?

Similar bias is also prevalent in AI healthcare systems: one study found that AI platforms recommended different psychiatric treatments to people of colour than to other patients with similar diagnoses. Even in recruitment, AI was found to favour and prioritise CVs with white-associated names most of the time.
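Researchers often detect this kind of bias with a counterfactual "name-swap" audit: score two otherwise identical CVs that differ only in the candidate's name, and compare. Here is a minimal Python sketch, where score_cv is a deliberately biased toy stand-in for whatever screening model is being audited (an assumption for illustration, not a real system or API).

```python
# score_cv is a placeholder for the screening model under audit
# (hypothetical: a real audit would call the actual model here).
def score_cv(cv_text: str) -> float:
    # Deliberately biased toy scorer, so the audit has something to find.
    return 0.82 if "Emily" in cv_text else 0.71

CV_TEMPLATE = "{name}. 5 years of Python experience. MSc Computer Science."

# Identical qualifications; only the name differs.
name_pairs = [("Emily Walsh", "Lakisha Washington")]

for name_a, name_b in name_pairs:
    score_a = score_cv(CV_TEMPLATE.format(name=name_a))
    score_b = score_cv(CV_TEMPLATE.format(name=name_b))
    # Any systematic gap means the name alone is moving the score.
    print(f"{name_a}: {score_a:.2f} | {name_b}: {score_b:.2f} "
          f"| gap: {score_a - score_b:+.2f}")
```

A real audit would run many such pairs and test whether the gap is statistically systematic rather than noise.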

Lastly, and perhaps most shockingly, AI chatbots were also found to show different levels of empathy to users based on their racial background. This means that conversational AI chatbots may also play a role in subtly perpetuating racial inequalities.

Thus, AI is a double-edged sword: a brilliant tool that can perform many functions impressively, yet one that can quickly come to divide and discriminate, building a society with disparities and stereotypes embedded into its very foundations.

Yet this doesn’t mean AI should be banned or phased out. It simply means the responsibility falls to us – the responsibility not to trust AI responses blindly. We should diversify and fact-check our sources and claims so we do not end up in a bias-fuelled echo chamber.

Awareness is the key to shaping more inclusive societies and more conscious AI use. We must first educate ourselves about these inherent biases before we can begin to eliminate them. Furthermore, AI responses are not absolute truth: AI can and does make mistakes, and it is our duty to check for and recognize them.

Lastly, change comes from the people – it is up to us to demand more ethical AI policies and regulations, and push for more inclusive data and stringent testing when training AI models.

AI is ultimately a reflection of us – do we want it to mirror our worst prejudices or our best hopes for equality?
