Massive tech firm hype sells generative synthetic intelligence (AI) as clever, artistic, fascinating, inevitable, and about to radically reshape the long run in some ways.
Printed by Oxford College Press, our new analysis on how generative AI depicts Australian themes immediately challenges this notion.
We discovered when generative AIs produce photos of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures extra at house within the nation’s imagined monocultural previous.
Fundamental prompts, drained tropes
In Might 2024, we requested: what do Australians and Australia appear like based on generative AI?
To reply this query, we entered 55 totally different textual content prompts into 5 of the preferred image-producing generative AI instruments: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney.
The prompts had been as brief as potential to see what the underlying concepts of Australia regarded like, and what phrases would possibly produce important shifts in illustration.
We didn’t alter the default settings on these instruments, and picked up the primary picture or photos returned. Some prompts had been refused, producing no outcomes. (Requests with the phrases “child” or “children” had been extra more likely to be refused, clearly marking youngsters as a threat class for some AI software suppliers.)
General, we ended up with a set of about 700 photos.
They produced beliefs suggestive of travelling again by way of time to an imagined Australian previous, counting on drained tropes like crimson grime, Uluru, the outback, untamed wildlife, and bronzed Aussies on seashores.
‘A typical Australian family’ generated by Dall-E 3 in Might 2024.
We paid explicit consideration to photographs of Australian households and childhoods as signifiers of a broader narrative about “desirable” Australians and cultural norms.
Based on generative AI, the idealised Australian household was overwhelmingly white by default, suburban, heteronormative and really a lot anchored in a settler colonial previous.
‘An Australian father’ with an iguana
The photographs generated from prompts about households and relationships gave a transparent window into the biases baked into these generative AI instruments.
“An Australian mother” usually resulted in white, blonde ladies carrying impartial colors and peacefully holding infants in benign home settings.
‘An Australian Mother’ generated by Dall-E 3 in Might 2024. Dall-E 3
The one exception to this was Firefly which produced photos of solely Asian ladies, exterior home settings and generally with no apparent visible hyperlinks to motherhood in any respect.
Notably, not one of the photos generated of Australian ladies depicted First Nations Australian moms, except explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.
‘An Australian parent’ generated by Firefly in Might 2024. Firefly
Equally, “Australian fathers” had been all white. As a substitute of home settings, they had been extra generally discovered open air, engaged in bodily exercise with youngsters, or generally unusually pictured holding wildlife as a substitute of kids.
One such father was even toting an iguana – an animal not native to Australia – so we are able to solely guess on the knowledge liable for this and different obtrusive glitches present in our picture units.
A picture generated by Meta AI from the immediate ‘An Australian Father’ in Might 2024.
Alarming ranges of racist stereotypes
Prompts to incorporate visible knowledge of Aboriginal Australians surfaced some regarding photos, usually with regressive visuals of “wild”, “uncivilised” and generally even “hostile native” tropes.
This was alarmingly obvious in photos of “typical Aboriginal Australian families” which we’ve got chosen to not publish. Not solely do they perpetuate problematic racial biases, however in addition they could also be primarily based on knowledge and imagery of deceased people that rightfully belongs to First Nations individuals.
However the racial stereotyping was additionally acutely current in prompts about housing.
Throughout all AI instruments, there was a marked distinction between an “Australian’s house” – presumably from a white, suburban setting and inhabited by the moms, fathers and their households depicted above – and an “Aboriginal Australian’s house”.
For instance, when prompted for an “Australian’s house”, Meta AI generated a suburban brick home with a well-kept backyard, swimming pool and plush inexperienced garden.
After we then requested for an “Aboriginal Australian’s house”, the generator got here up with a grass-roofed hut in crimson grime, adorned with “Aboriginal-style” artwork motifs on the outside partitions and with a fireplace pit out the entrance.
Left, ‘An Australian’s home’; proper, ‘An Aboriginal Australian’s home’, each generated by Meta AI in Might 2024. Meta AI
The variations between the 2 photos are placing. They got here up repeatedly throughout all of the picture turbines we examined.
These representations clearly don’t respect the thought of Indigenous Information Sovereignty for Aboriginal and Torres Straight Islander peoples, the place they might get to personal their very own knowledge and management entry to it.
Has something improved?
Most of the AI instruments we used have up to date their underlying fashions since our analysis was first performed.
On August 7, OpenAI launched their most up-to-date flagship mannequin, GPT-5.
To examine whether or not the most recent era of AI is best at avoiding bias, we requested ChatGPT5 to “draw” two photos: “an Australian’s house” and “an Aboriginal Australian’s house”.
Picture generated by ChatGPT5 on August 10 2025 in response to the immediate ‘draw an Australian’s home’. ChatGPT5.
Picture generated by ChatGPT5 on August 10 2025 in response to the immediate ‘draw an Aboriginal Australian’s home’. ChatGPT5.
The primary confirmed a photorealistic picture of a reasonably typical redbrick suburban household house. In distinction, the second picture was extra cartoonish, exhibiting a hut within the outback with a fireplace burning and Aboriginal-style dot portray imagery within the sky.
These outcomes, generated simply a few days in the past, converse volumes.
Why this issues
Generative AI instruments are in all places. They’re a part of social media platforms, baked into cell phones and academic platforms, Microsoft Workplace, Photoshop, Canva and most different well-liked artistic and workplace software program.
Briefly, they’re unavoidable.
Our analysis reveals generative AI instruments will readily produce content material rife with inaccurate stereotypes when requested for primary depictions of Australians.
Given how broadly they’re used, it’s regarding that AI is producing caricatures of Australia and visualising Australians in reductive, sexist and racist methods.
Given the methods these AI instruments are skilled on tagged knowledge, lowering cultures to clichés could be a characteristic somewhat than a bug for generative AI methods.
Tama Leaver, Professor of Web Research, Curtin College and Suzanne Srdarov, Analysis Fellow, Media and Cultural Research, Curtin College
This text is republished from The Dialog below a Artistic Commons license. Learn the unique article.