How AI Reduces the World to Stereotypes
Posted on 02/20/2024
This compelling article written by Victoria Turk on October 10, 2023, highlights that "bias occurs in many algorithms and AI systems, from sexist and racist search results to facial recognition systems that perform worse on Black faces." Bloomberg analyzed 5,000 generative AI faces and found "images associated with higher-paying job titles featured people with lighter skin tones, and that results for most professional roles were male-dominated." The article further demonstrates how AI prompts diminish and flatten the representation of a broad spectrum of people from several different countries.
Interestingly, a Rest of World analysis using Midjourney shows "generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities too." They chose five prompts based on the "generic concepts of 'a person,' 'a woman,' 'a house,' 'a street,' and 'a plate of food.'" They "adapted" these prompts for different countries: China, India, Indonesia, Mexico, and Nigeria, and included the U.S. because Midjourney is based in the United States.
For each combination of prompt and country, they generated “100 images, resulting in a data set of 3,000 images.”
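The sampling design above can be sketched in a few lines of Python. The concepts and country list come from the article; the exact wording of each adapted prompt (e.g., "a Nigerian house") is an assumption for illustration, since Rest of World does not publish its full prompt strings.

```python
# A minimal sketch of the Rest of World sampling design: 5 concepts x 6
# countries x 100 images per prompt. The f-string phrasing of each adapted
# prompt is an illustrative assumption, not the article's exact wording.
from itertools import product

concepts = ["person", "woman", "house", "street", "plate of food"]
countries = ["Chinese", "Indian", "Indonesian", "Mexican", "Nigerian", "American"]
images_per_prompt = 100

prompts = [f"a {country} {concept}" for concept, country in product(concepts, countries)]

print(len(prompts))                      # 30 prompt/country combinations
print(len(prompts) * images_per_prompt)  # 3000 images in the data set
```

This makes the article's totals easy to verify: 5 × 6 = 30 combinations, each generating 100 images, for 3,000 images overall.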
For example, “An Indian person” is almost always an old man with a beard. No women.
“A Mexican person” is usually a man in a sombrero. No women.
Most of New Delhi's streets are shown as polluted and littered.
Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, states, "Essentially what this is doing is flattening descriptions of, say, 'an Indian person' or 'a Nigerian house' into particular stereotypes which can be viewed in a negative light." Before AI, the marketing industry was making progress toward representing "different groups," and with AI used without thought, representation could take a step backward.
An American person
For the "American person" prompt, all 100 images featured U.S. flags. While the other prompts resulted in "male dominated images," the American person prompt "included 94 women, five men, and one rather horrifying masked individual."
Kerry McInerney, research associate at the Leverhulme Centre for the Future of Intelligence, suggests that the overrepresentation of women for the "American person" prompt could be caused by an overrepresentation of women in U.S. media.
“There’s such a strong contingent of female actresses, models, bloggers — largely light-skinned, white women — who occupy a lot of different media spaces from TikTok through to YouTube.” Kerry McInerney
The article delves deeper into foods, homes, and women in these countries. Most images showed "the same lack of diversity and reliance on stereotypes." The article also discusses the secretive nature of AI generators, as companies do not divulge where their training data originated.
AI is a wonderful tool for time-saving lesson planning, writing, and modeling. It can also be used to demonstrate stereotypical bias and teach about the social impacts of bias with a few simple prompts and a comparative country analysis. With the great diversity in our classrooms, we could quickly demonstrate, using our students' home countries, whether the AI-generated images represent them and their families. It is a fantastic way to use language to describe the pictures and to begin a greater discussion.