AI's image problem: Breaking free from visual stereotypes

Collage with mirrors reflecting diverse human figures, symbolising AI data's human origin and the 'human in the loop' concept.
Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Data is a Mirror of Us / CC-BY 4.0

Ben Wodecki explores how clichés in images and marketing materials for AI products and services have the power to perpetuate biases

Have you ever noticed that news stories and marketing material about artificial intelligence (AI) are typically illustrated with clichéd and misleading images, like overly sexualised humanoid robots and glowing brains?

How about businesses referring to chatbot applications as “thought partners” or “copilots” in a way that puts their contributions to productivity on a par with a human’s?

To some, simple phrases or images in the media are trivial, even unnoticeable. To a group of researchers, however, these tropes play a fundamental role in perpetuating biases in emerging technologies.

Capacity speaks to some of the pioneers working to change perceptions and discourse around AI, and how it is depicted in the media and beyond.

Improving Images

One of these groups is Better Images of AI, a nonprofit organisation spearheaded by We and AI founder Tania Duarte.

The group argues that photo libraries and content platforms lack the variety of images needed to depict AI accurately, resulting in sci-fi-inspired pictures of the Terminator or anthropomorphised robots.

Often, such images show robots as white, a depiction that risks perpetuating the false notion that AI is built by and for white people.

Further, such depictions entrench public mistrust of AI, playing on fears of robot takeovers from 1980s action movies rather than the more mundane reality: simple chatbot systems designed to polish the odd email.

Better Images of AI set out to change the way AI is portrayed, challenging the media over its use of lazy, uninspired depictions of the technology.

The group wants depictions of AI to represent a wider range of humans, cultures, and socioeconomic backgrounds, utilising a variety of ways to show different types, uses, sentiments, and implications of AI, not just humanoid robots.

The nonprofit has collated an ever-growing library of images that its team of researchers and thinkers believe better depict AI, and these have gone on to feature in Time Magazine and The Washington Post. The group also works with engineers and designers from the BBC’s R&D team, among other organisations, to help spread its message.

On the left, there is an image of a tree on a snowy, rocky landscape, with light shining in from the background. On the right, there is the same landscape, but it is now reproduced as grey and silver cubes in the shape of a tree and technical terrain, backgrounded by the light.
An example visual from Better Images of AI's library - this one is for generative image models and was created by artist Linus Zoll as part of the Visualising AI project launched by Google DeepMind | Linus Zoll & Google DeepMind / Better Images of AI / CC-BY 4.0

Speaking to Capacity, Duarte said some larger media outlets are now spending more time on AI imagery, carefully selecting the pictures they use to avoid falling into easy trope traps.

“They’ve really got the memo and they’re using some of the techniques that our images have used, but getting their own commissions to do it, and that’s great from our project’s perspective.

“We certainly see some really good images now, in a way that was incredibly rare when we first started.”

While image use at major publications is improving, the situation is worsening among smaller-scale content creators.

Duarte explained that as wider interest in AI has grown, so too has the number of independent YouTube channels and content creators publishing their own coverage, often featuring AI-generated images that go against everything the group stands for.

“The YouTubers and blogs are using AI image generators, and what’s happening is all the tropes that existed are coming out, and they’re making them worse. They’re more varied now: you might get an orange robot as well as a white robot.

“But it's also a bit more insidious because people who before might not have even been able to make an image are now coming up with these crazy concepts that tick every bad box. That's the danger, but it also sheds a big light on the issue with bad datasets being used by image generator models.”

Duarte revealed that such images resulted in the creation of an unofficial persona for Gemini, Google’s flagship AI offering. A dearth of official materials on the model from Google led content creators to opt for AI-generated robot likenesses, which became so pervasive that even professionals mistake them for official imagery.

Democratising Discourse

Another group looking to change the way we see AI is AIxDESIGN, which runs community-led research projects and produces public resources aimed at democratising AI literacy.

AIxDESIGN counts over 8,000 artists, designers, creative technologists and researchers looking to address socio-technical problems in AI.

Ploipailin Flynn, founding organiser and project lead at AIxDESIGN, told Capacity that their work in AI is similar to wider societal conversations around more conscious language and social causes, such as real estate firms referring to the master bedroom as the main bedroom.

“There’s always this evolution of language, and I think the work that we’re doing now, you will begin to see adopted more in the next few years,” Flynn said.

“The conversations that we’re having now are like those really deep-dive YouTube videos or TikToks that get really nerdy about something trivial, like how lighting choices in a movie help emphasise the narrative. These are niche, granular details, but practitioners are interested in figuring out the mechanisms behind how things work, and in a few years I think the impact will be felt.”

The work of AIxDESIGN and Better Images of AI intertwines: the groups collaborate, recently compiling a list of image briefs to make it easier for creatives to design images for their galleries that offer more reflective depictions of AI than can be found in stock photography.

The groups argue that tech companies’ co-opting of images and language, whether intentional or otherwise, shapes public perception and can perpetuate harmful tropes.

Take the term “Copilot,” for example. First adopted by GitHub in 2021, the term was then co-opted by parent company Microsoft for its productivity chatbot, before other companies began using it to describe their AI services, from big names like SAP to small-scale startups like Robin AI.

This framing of AI as being on a par with human users, however inaccurate, is just one example of what groups like AIxDESIGN and Better Images of AI are trying to challenge: misleading portrayals designed to play to the crowd.

Flynn likened it to how people portray themselves on LinkedIn, saying: “What success looks like on LinkedIn is very white American male because it was adopted in Silicon Valley first. It's like, very showboat-ey, but to become popular on LinkedIn, you need to adopt that language.”

A brightly coloured illustration which can be viewed in any direction. It has many elements to it working together: men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone, people in a production line. Motifs such as network diagrams and melting emojis are placed throughout the busy vignettes.
Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

This white, male co-opting of AI is no different, and it showcases the need for more diverse voices in AI discussions, rather than reliance on the same handful of industry figureheads.

Flynn and the team of researchers at AIxDESIGN have been looking to address some of this by exploring different perspectives on AI, such as “Slow AI”, which looks at aspects and lines of thinking often left by the wayside as AI developers race to make their systems appeal to as wide an audience as possible.

“There are opportunities in Small AI for things that are traditionally censored in big AI. So any work related to queerness is met with an automatic blanket no. Same with sensitive, or what we call ‘spicy,’ topics: victim advocacy such as childhood sexual abuse or sex worker advocacy, that is just a broad blanket no in the big AI models,” she explained.

“It means that we're excluding these people from using productivity tools that we're touting about on LinkedIn, like, ‘I can write emails 50% faster with this AI tool,’ but we're actually excluding a bunch of people who are doing really hard work from using these tools because it's a big AI model as opposed to a super-focused, small AI model built for them, by them.”

The big AI developers have had to balance marketing their innovative new models, and the growth their shareholders expect, against responsible representation of AI capabilities. Largely, they have remained stuck in one way of thinking.

Researchers like those at AIxDESIGN and Better Images of AI aren’t going to back down from emphasising the socio-technical aspects of AI. They might not be at the forefront of the industry, but they will keep fighting to create a better vision for a technology that will touch each of our lives in the coming years.
