How this programmer and poet thinks we should tackle racially biased AI
The research and poetry of Joy Buolamwini shine a light on a major problem in artificial intelligence.
THE FIRST TIME Joy Buolamwini ran into the problem of racial bias in facial recognition technology, she was an undergraduate at the Georgia Institute of Technology trying to teach a robot to play peekaboo. The artificial intelligence system couldn’t recognize Buolamwini’s dark-skinned face, so she borrowed her white roommate to complete the project. She didn’t stress too much about it—after all, in the early 2010s, AI was a fast-developing field, and that type of problem was sure to be fixed soon.
It wasn’t. As a graduate student at the Massachusetts Institute of Technology in 2015, Buolamwini encountered a similar issue. Facial recognition technology once again didn’t detect her features until she started coding while wearing a white mask. AI, as impressive as it can be, still has a long way to go on one seemingly simple task: it can fail, disastrously, to read Black faces and bodies. Addressing this, Buolamwini says, will require reimagining how we define successful software, train our algorithms, and decide for whom specific AI programs should be designed.
While studying at MIT, the programmer confirmed that computers’ bias wasn’t limited to an inability to detect darker faces. Through her Gender Shades project, which evaluated AI products’ ability to classify gender, she found that software designating a person’s gender as male or female based on a photo was much worse at correctly gendering women and darker-skinned people. For example, although an AI developed by IBM correctly identified gender in 88 percent of test images overall, it classified only 67 percent of dark-skinned women as female while getting nearly 100 percent of light-skinned men right.
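The core of that finding is easy to reproduce in code. Below is a minimal Python sketch of that kind of disaggregated evaluation; the records and numbers are invented for illustration and are not drawn from the Gender Shades study itself, which used curated photo benchmarks labeled by gender and skin type.

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """Break classification accuracy down by demographic subgroup.

    Each record is (true_gender, predicted_gender, skin_tone).
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for true_gender, predicted, skin_tone in records:
        group = (skin_tone, true_gender)
        totals[group] += 1
        correct[group] += predicted == true_gender  # bool counts as 0/1
    return {group: correct[group] / totals[group] for group in totals}

# Invented predictions mimicking the pattern Gender Shades documented:
# near-perfect on light-skinned men, far worse on dark-skinned women.
records = [
    ("male", "male", "light"), ("male", "male", "light"),
    ("female", "female", "light"), ("female", "male", "light"),
    ("male", "male", "dark"), ("male", "male", "dark"),
    ("female", "male", "dark"), ("female", "male", "dark"),
]

for (skin, gender), acc in sorted(accuracy_by_subgroup(records).items()):
    print(f"{skin}-skinned {gender}s: {acc:.0%} correct")
```

A single overall accuracy number averages across these groups and hides exactly this kind of failure.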
“Our metrics of success themselves are skewed,” Buolamwini says. IBM’s Watson Visual Recognition AI seemed useful for facial recognition, but when skin tone and gender were considered, it quickly became apparent that the “supercomputer” was failing some demographics. IBM responded within a day of receiving the Gender Shades results in 2018 and released a statement detailing how it had been working to improve its product, including by updating training data, upgrading recognition capabilities, and evaluating its newer software for bias. The company improved Watson’s accuracy in identifying dark-skinned women, shrinking the error rate to about 4 percent.
Prejudiced AI-powered identification software has major implications. At least four innocent Black men and one woman have been arrested in the US in recent years after facial recognition technology incorrectly identified them as criminals, mistaking them for other Black people. Housing units that use similar automated systems to let tenants into buildings can leave dark-skinned and female residents stranded outdoors. That’s why Buolamwini merges her ethics work with art in a way that humanizes very technical problems. She is also founder and artist-in-chief of the Algorithmic Justice League, which aims to raise public awareness about the impacts of AI and support advocates who prevent and counteract its harms. She has mastered both code and words. “Poetry is a way of bringing in more people into these urgent and necessary conversations,” says Buolamwini, who is the author of the book Unmasking AI.
Perhaps Buolamwini’s most famous work is her poem “AI, Ain’t I a Woman?” In an accompanying video, she demonstrates Watson and other AIs misidentifying famous Black women such as Ida B. Wells, Oprah Winfrey, and Michelle Obama as men. “Can machines ever see my queens as I view them?” she asks. “Can machines ever see our grandmothers as we knew them?”
This type of bias has long been recognized as a problem in the burgeoning field of AI. But even when developers knew their product wasn’t good at recognizing dark-skinned faces, they didn’t necessarily address the problem. They realized that fixing it would demand significant investment, often without much institutional support, Buolamwini says. “It turned out more often than not to be a question of priority,” especially with for-profit companies focused on mass appeal.
Hiring more people of diverse races and genders to work in tech can lend perspective, but it can’t solve the problem on its own, Buolamwini adds. Much of the bias derives from the data sets used to train computers, which may underrepresent some groups, lacking, for instance, a large pool of images of dark-skinned women. Diverse programmers alone can’t build an unbiased product from a biased data set.
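One way to surface that kind of gap is simply to count before training. Here is a sketch, with made-up proportions, of auditing a training set’s demographic composition; in practice the group labels would come from human annotation of the images themselves.

```python
from collections import Counter

def audit_composition(labels):
    """Report each demographic subgroup's share of a training set."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

# Made-up composition skewed toward light-skinned men, the kind of
# imbalance Buolamwini describes in face data sets.
labels = (
    [("light", "male")] * 700
    + [("light", "female")] * 200
    + [("dark", "male")] * 80
    + [("dark", "female")] * 20
)

for group, share in sorted(audit_composition(labels).items()):
    print(f"{group}: {share:.0%} of training data")
```

With 35 light-skinned men for every dark-skinned woman in this hypothetical set, the model simply has too few examples to learn the underrepresented group well, no matter who writes the training code.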
In fact, it’s impossible to fully rid AI of bias because all humans have biases, Buolamwini says, and their beliefs make their way into code. She wants AI developers to be aware of those mindsets and strive to make systems that do not propagate discrimination.
This involves being deliberate about which computer programs to use, and recognizing that different services and populations may need different ones. “We have to move away from a universalist approach of building one system to rule them all,” Buolamwini explains. She gives the example of a healthcare AI: a model trained mainly on data from male patients could miss signs of disease in female patients. That doesn’t make the model useless, since it could still improve care for one sex. But developers should also consider building a female-specific model.
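As a toy illustration of that reasoning (all numbers and the threshold rule below are invented), consider a classifier that flags disease when a biomarker crosses a learned cutoff. Pooling male and female data yields a cutoff tuned to the male signal; a female-specific fit catches a case the pooled one misses.

```python
# Toy sketch of the "one model per population" idea. The biomarker
# numbers and the threshold rule are invented for illustration.

def fit_threshold(values, labels):
    """Learn a cutoff: flag disease when a biomarker exceeds the
    midpoint between the healthy (0) and diseased (1) group means."""
    healthy = [v for v, lab in zip(values, labels) if lab == 0]
    diseased = [v for v, lab in zip(values, labels) if lab == 1]
    return (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

# In this invented data, disease presents at lower biomarker values in
# female patients, so a pooled model is dominated by the male signal.
male_vals, male_labs = [4, 5, 9, 10], [0, 0, 1, 1]
female_vals, female_labs = [2, 3, 5, 6], [0, 0, 1, 1]

universal = fit_threshold(male_vals + female_vals, male_labs + female_labs)
female_only = fit_threshold(female_vals, female_labs)

print(f"universal cutoff: {universal}")          # 5.5: misses the sick patient at 5
print(f"female-specific cutoff: {female_only}")  # 4.0: flags her correctly
```

The point is not the arithmetic but the design choice: a model can be sound for one population and quietly fail another.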
But even if it were possible to create unbiased algorithms, they could still perpetuate harm. For example, a theoretically flawless facial recognition AI could fuel state surveillance if it were rolled out across the US. (The Transportation Security Administration plans to try voluntary facial recognition checks in place of manual screening in more than 400 airports in the next several years. The new process might become mandatory in the more distant future.) “Accurate systems can be abused,” Buolamwini says. “Sometimes the solution is to not build a tool.”