Equitable AI for All: How Developers Combat Bias in AI Algorithms

UPDATED: September 17, 2024
PUBLISHED: July 2, 2024

Bias and discrimination are often inadvertently built into the algorithms we rely on. Here’s how some tech developers working toward equitable AI are correcting that.

Data scientist and author Meredith Broussard doesn’t think technology can solve most of our issues. In fact, she’s concerned about the implications of our dependency on technology—namely, that technology has proven itself to be unreliable and, at times, outright biased.

Broussard’s book, More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech, unpacks the myth of technological neutrality and challenges us to question the societal frameworks that marginalize people of color, women and people who have disabilities.

“It’s a good time to be talking about the nuances of artificial intelligence because everybody is aware of AI now in a way that they weren’t in 2018 when my previous book came out,” says Broussard, referring to her first book, Artificial Unintelligence: How Computers Misunderstand the World. “Not only are people talking about it now, but I feel like this is the right moment to add layers to the discussion and talk about how AI might be helping or mostly hurting some of our entrenched social issues.”

Techno-futurists often wax philosophical about what will happen when AI becomes sentient. But technology is affecting our world now, and it’s not always pretty. When developing technology, we must understand the ugly, from bias in AI to the unseen “ghost workers” who train it.

The bias in the machine

I encountered Broussard’s first book in graduate school while I was developing a course of study that looked closely at AI in journalism. I began my research with rose-tinted glasses. I thought AI was going to save the world through our collaborative relationship with technology.

At that point, ChatGPT had not yet been released to the public. But the conversation around bias in AI was already happening. In my first semester, I read Ruha Benjamin’s Race After Technology: Abolitionist Tools for the New Jim Code, which taught me to be skeptical of machines. Broussard’s current book takes its title from a passage in Benjamin’s, which describes bias in technology as “more than a glitch.”

“This is the idea that automated systems or AI systems discriminate by default,” Broussard says. “So people tend to talk about computers as being neutral or objective or unbiased, and nothing could be further from the truth. What happens in AI systems is that they are trained on data from the world as it is, and then the models reproduce what they see in the data. And this includes all kinds of discrimination and bias.”
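Broussard’s point can be made concrete with a toy example. The sketch below is a hypothetical illustration, not anything from her work or from a real hiring system: it trains a simple classifier on synthetic “historical hiring” data in which one group was favored, and the model reproduces that gap even though it never sees group membership directly. The feature names and numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" is a protected attribute; "score" is a proxy feature that correlates
# with it, e.g. a credential one group has historically had less access to.
group = rng.integers(0, 2, size=n)
score = rng.normal(loc=50 + 10 * group, scale=8, size=n)

# Historical decisions favored high scores, so group 1 was hired more often.
# These labels encode the past, not merit.
hired = (score + rng.normal(scale=5, size=n) > 58).astype(int)

# The model never sees "group", only "score" -- and still inherits the gap.
model = LogisticRegression().fit(score.reshape(-1, 1), hired)
pred = model.predict(score.reshape(-1, 1))

for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")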

Racist robots

AI algorithms have been known to discriminate against people of color, women, transgender individuals and people with disabilities.

Adio Dinika is a research fellow at the Distributed AI Research Institute (DAIR), a nonprofit organization that conducts independently funded AI research. DAIR acknowledges on its About Us page that “AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes, it can be beneficial.”

“Oftentimes we hear people saying things like, ‘I don’t see color,’ which I strongly disagree with,” Dinika says. “It has to be seen because color is the reason why we have the biases that we have today. So when you say I don’t see color, that means you’re glossing over the injustice and the biases that are already there. AI is not… a magical tool, but it relies upon the inherent biases which we have as human beings.”

Explainable fairness and algorithmic auditing

For Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy and founder of O’Neil Risk Consulting and Algorithmic Auditing (ORCAA), bias and discrimination can be broken down into a mathematical formula.

ORCAA uses a framework called Explainable Fairness, which takes the emotion out of deciding whether something is fair. If a hiring algorithm disproportionately selects men for interviews, an auditor using the framework checks whether legitimate factors, such as years of experience or level of education, can account for the gap. The point is to come back with a definitive answer as to whether the algorithm is fair and equitable.
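To make that concrete, here is a minimal sketch of the kind of comparison such an audit might run. It is not ORCAA’s actual tooling, and the column names and toy data are hypothetical; the idea is simply to measure the selection gap, then re-measure it among applicants with comparable experience and education and see whether the gap survives.

import pandas as pd

applicants = pd.DataFrame({
    "gender":           ["M", "F", "M", "F", "M", "F", "M", "F"],
    "years_experience": [5,   5,   2,   2,   8,   8,   3,   3],
    "degree":           ["BA", "BA", "BA", "BA", "MS", "MS", "BA", "BA"],
    "selected":         [1,   0,   0,   0,   1,   1,   1,   0],
})

# Raw gap: interview-selection rate by gender.
print(applicants.groupby("gender")["selected"].mean())

# The same comparison within strata of legitimate factors. If a gap remains
# between equally qualified men and women, experience and education do not
# explain it -- which is the question the audit is trying to answer.
strata = applicants.groupby(["years_experience", "degree", "gender"])["selected"].mean()
print(strata.unstack("gender"))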

“I have never interviewed somebody who is harmed by an algorithm who wants to know how an algorithm works,” O’Neil says. “They don’t care. They want to know whether it was treating them fairly. And, if not, why not?”

The invisible labor powering AI

While large language models are trained on large datasets, they also need human help answering questions like, “Is this pornography?”

A content moderator’s job is to look at user-generated content on websites like Facebook and determine whether it violates the company’s community standards. These moderators are often forced to look at violent or sexually explicit materials (including child sexual abuse materials), which have been found to cause PTSD symptoms in workers.

Part of Dinika’s research at DAIR involves traveling to the Global South, where much of this work is outsourced because labor is cheaper.

“I’ve seen things that have shocked me, to an extent where I would actually say things that have traumatized me,” Dinika says. “I went to Kenya and spoke to some of these people and saw their payslips, and what I saw there was beyond shocking because you realize that people are working nine hours a day and seeing horrific stuff and not being provided with any form of psychological compensation whatsoever.”

So you want to build equitable AI

O’Neil does not think the creators of a technology can proclaim that the technology is equitable until they’ve identified all the stakeholders. This includes people who don’t use the technology but could still be harmed by it. It also requires considering the legal implications if the technology causes harm that breaks the law, for instance, if a hiring algorithm were found to discriminate against autistic applicants.

“I would say you could declare something an ethical version of tech if you’ve made sure that none of the stakeholders are being harmed,” O’Neil says. “But you have to do a real interrogation into what that looks like. Going back to the issue of developing ethical AI, I then think one of the things that we need to do is to make sure that we are not building these systems on the broken backs of exploited workers in the Global South.”

Dinika adds, “If we involve [the people who are affected] in the development of our systems, then we know that they’re able to quickly flick out the problematic issues in our tools. Doing so may help us mitigate them from the point of development rather than when the tool is out there and has already caused harm.”

We can’t code our way out of it

“There’s a lot of focus on making the new, shiny, moonshot thing. But we are in a really interesting period of time where all of the problems that were easy to solve with technology have been solved,” Broussard says. “And so the problems that we’re left with are the really deeply entrenched, complicated, long-standing social problems, and there’s no way to code our way out of that. We need to change the world at the same time that we change our code.” 

Springer-Norris is an AI writer who can barely write a single line of code.

This article originally appeared in the July issue of SUCCESS+ digital magazine. Photo courtesy aniqpixel/Shutterstock.com
