Researchers asked AI to show a typical Australian dad: he was white and had an iguana

Generative artificial intelligence (AI) is heralded as a transformative technology, promising gains in creativity, efficiency, and problem-solving. Yet, as recent research demonstrates, the technology is not immune to the biases ingrained in its training data. An Australian study conducted in mid-2025 raises pressing concerns about how these tools depict identity, culture, and family in the Australian context, often falling back on reductive and problematic stereotypes.
Unpacking the Study: How AI Sees the ‘Australian Dad’
Researchers from Australian universities tasked several prominent generative AI systems, including OpenAI’s DALL-E 3, Midjourney, and Meta’s Imagine, with visualising what a ‘typical Australian dad’ looks like. The results were revealing and, at times, troubling: the overwhelming majority of the generated images depicted white men, frequently in casual attire, and in some cases included quirky or inaccurate details, like the inexplicable presence of an iguana as a pet. Aboriginal, Torres Strait Islander, Asian, and other minority backgrounds were rarely, if ever, represented. Meanwhile, ‘mums’ were frequently rendered as white, middle-class, and performing care roles, reflecting gendered assumptions about parenting.
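For readers curious what this kind of probing looks like in practice, the sketch below shows one way the prompting step might be replicated against DALL-E 3 using OpenAI’s Python client. The prompt text and the audit loop suggested in the comments are illustrative assumptions, not the researchers’ published methodology.

```python
# Minimal sketch of replicating the prompting step with one of the systems
# named in the study (DALL-E 3 via the OpenAI Python client). Assumes an
# OPENAI_API_KEY in the environment; the study's actual tooling and prompt
# set are not described here, so treat this as illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The kind of deliberately generic prompt the study relied on.
prompt = "A typical Australian dad"

response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    n=1,  # DALL-E 3 accepts one image per request
    size="1024x1024",
)

# The returned URL points at the generated image. Repeating this call many
# times and coding each image for perceived gender, ethnicity, and setting
# is the essence of the audit the researchers describe.
print(response.data[0].url)
```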
This is not just a quirk of computer artistry but reflects a much larger issue: the datasets that these AI systems are trained on predominantly come from online sources that themselves over-represent certain demographics and cultural norms while excluding others. This amplifies social biases and creates a feedback loop of exclusion and misrepresentation.
Algorithmic Bias: A Global AI Challenge
The echoes of this Australian study reverberate worldwide as governments, advocacy groups, and tech giants increasingly grapple with AI’s inherent biases. In 2024, the United Nations and the European Union both issued guidelines and regulatory frameworks urging technology companies to audit, document, and transparently address algorithmic discrimination, highlighting the real-world harms biased AI can cause in policing, hiring, healthcare, and content moderation. The United States, for its part, has published a Blueprint for an AI Bill of Rights setting out principles for fair treatment by automated systems.
The Australian Human Rights Commission warned in mid-2025 of an acute risk: left unchecked, the use of AI in critical social services, employment decisions, and even creative sectors could “amplify and entrench existing racial and gender disparities.” Advocacy groups have called for government-led audits, standardised fairness tests for algorithms, and more culturally diverse datasets to counteract these effects.
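What a ‘standardised fairness test’ might look like in practice is still an open question, but one simple form is a representation audit: label a sample of generated images and compare the observed demographic mix against a reference distribution. The sketch below is a hypothetical illustration; the category labels, reference shares, and the choice of a chi-square test are assumptions, not any prescribed standard.

```python
# Hypothetical representation audit for a batch of generated images.
# Labels are assumed to come from human annotators (or a vetted classifier),
# and the reference shares stand in for census-style data; none of this is
# an official fairness standard.
from collections import Counter

from scipy.stats import chisquare

# Annotated perceived background for 200 images from one prompt (toy data).
observed_labels = ["anglo"] * 188 + ["asian"] * 8 + ["indigenous"] * 2 + ["other"] * 2

# Illustrative reference shares for the same categories (must sum to 1.0).
reference_shares = {"anglo": 0.58, "asian": 0.18, "indigenous": 0.04, "other": 0.20}

counts = Counter(observed_labels)
n = sum(counts.values())

observed = [counts[category] for category in reference_shares]
expected = [share * n for share in reference_shares.values()]

# A chi-square goodness-of-fit test flags whether the generated mix
# deviates significantly from the reference distribution.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.2g}")
for category, share in reference_shares.items():
    print(f"{category}: generated {counts[category] / n:.1%} vs reference {share:.0%}")
```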
Why Do AI Stereotypes Matter?
At first glance, a comically stereotyped image of an Australian dad (featuring, say, an iguana) might seem trivial, but experts assert that these visualisations reinforce and perpetuate damaging stereotypes. “AI plays an increasingly authoritative role in shaping public perceptions, from media to education and advertising,” explains Dr. Suzanne Srdarov, co-lead author of the study. “When AI consistently omits minority or marginalised identities, it contributes to their social erasure.”
Furthermore, generative AI is now being used to create stock photos, illustrations for publications, and imagery in popular culture. Such representations set the tone for who is seen as ‘normal,’ ‘typical,’ or ‘authentic’ in society. According to recent data, stock photo services are increasingly integrating AI-generated images, with over 35% of new content synthesised by AI as of July 2025. Experts warn that, left unchecked, these systems could propagate culturally narrow narratives for years to come.
Industry Response and Calls for Change
The companies behind major AI image generators have responded with a mixture of commitments and technical tweaks. OpenAI, for example, launched updates in late 2024 that aim to promote more inclusive depictions by default. Meta has stated it is “actively refining its datasets and output moderation” to reduce biases. Still, independent audits highlight the immense challenge: many biases are deeply rooted in the billions of web images and texts that make up training corpora, making ongoing correction a complex and never-ending task.
Meanwhile, researchers advocate a multi-pronged approach: increasing the diversity of training data, building explicit fairness constraints into AI model design, and involving minority groups in both dataset curation and policy decisions. In Australia, policymakers have begun consulting with Indigenous groups, multicultural organisations, and the technology sector to shape a national framework for ethical AI development. Legislation aimed at ensuring algorithmic transparency and redress is expected to be debated in late 2025.
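Of those remedies, the ‘fairness constraints’ idea is the most concrete to illustrate. One common form is loss reweighting, where examples from under-represented groups contribute proportionally more during training. The sketch below is a toy illustration under assumed group counts; real interventions in large image models are considerably more involved.

```python
# Hypothetical example of one "fairness constraint": inverse-frequency
# reweighting, so each group contributes equally to the training loss in
# aggregate. Group names and counts are illustrative assumptions, not
# figures from the study or any real training corpus.
group_counts = {"anglo": 90_000, "asian": 6_000, "indigenous": 1_500, "other": 2_500}

total = sum(group_counts.values())
num_groups = len(group_counts)

# Weight per example: rarer groups get proportionally larger weights.
weights = {group: total / (num_groups * count) for group, count in group_counts.items()}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f} per example")
```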
Looking Ahead: Rethinking AI and Representation
The episode underscores a broader theme facing the AI industry in 2025: technological progress must be matched by social responsibility and robust regulation. “If we want AI to empower rather than marginalize, it’s critical that these systems are built and governed with diversity and fairness at their core,” says Professor Tama Leaver, another lead researcher of the study.
Whether in Australia or elsewhere, the question is becoming not just how advanced AI systems can be, but how equitably they serve all of society. As generative AI now helps shape imagery in newsrooms, marketing, education, and the arts, experts caution that oversight, transparency, and inclusivity must keep pace. Otherwise, the future visible in AI’s imagination will be less diverse, more static, and ultimately less reflective of the societies these technologies seek to serve.

