Hours after the Bondi terrorist attack, while many Australians slept, a myth was generated and laundered through artificial intelligence.
The one bright spot from Sunday’s atrocity targeting Jewish Australians, which left 15 dead and 29 injured, was the heroics of bystander Ahmed al-Ahmed, who was filmed fearlessly tackling and disarming one of the alleged gunmen.
But in the early hours of Monday morning, an alternative narrative emerged: the story of the Muslim Syrian-born immigrant risking his life to subdue the shooter was wrong. The “real” identity of the hero was a 43-year-old Australian IT professional called “Edward Crabtree”.
The source of this false claim was what purported to be a news website. Everything suggested this site, “The Daily”, was untrustworthy. The domain, www.thedailyaus.world (similar to the real Australian news outlet The Daily Aus), was registered on Sunday. It had just one other article. None of its writers existed anywhere else.
The article, too, had all the hallmarks of a set-up. It cited fake quotes from figures like Prime Minister Anthony Albanese, it incorrectly identified the ex-NSW Police commissioner Karen Webb as still in the role and at a press conference, and it described events that didn’t happen (in the article’s telling, the bystander “pinned the man to the ground until other bystanders rushed in to help restrain him, and police arrived within minutes”).
Two AI text detectors run by Crikey over the text identified it as likely written by AI.

A screenshot of the hoax website Grok initially relied upon.
The article was posted on Elon Musk-owned X as early as 9.46pm on Sunday night, less than three hours after the attack began.
ASPI analyst Nathan Ruser documented the fake narrative on X at 12.20am, calling it “AI-generated disinformation falsely claiming the hero was a Sydney-born local called ‘Edward Crabtree’, with an entire AI-generated backstory”.
But by then it had already spread across the platform, jumped over to other parts of the internet, and was being used to undermine the real heroics of al-Ahmed.
Then X’s chatbot, Grok, joined in too. At almost exactly the same time as Ruser’s post, 12.25am, @grok first replied about Crabtree to a user who had prompted the AI chatbot in a now-deleted post. (For those not familiar, Musk’s AI chatbot Grok is integrated into X, allowing any user to ask the chatbot a question and to get it to respond to content on the site simply by tagging it.)
“Edward Crabtree is a 43-year-old IT professional and senior solutions architect from Sydney, Australia. On Dec. 14, 2025, he heroically disarmed a gunman during a mass shooting at Bondi Beach, tackling the attacker, wrestling away his rifle despite being shot twice, and pinning him until police arrived,” the bot declared.
Over the next hour or so, Grok responded to multiple people declaring that Crabtree was the hero responsible. Then it began to waver, hedging that the bystander was either Crabtree or, citing “sources”, that the hero was in fact al-Ahmed. Then Grok declared that the Edward Crabtree story was “AI-generated fake news” that came from “a newly created website spreading misinformation”.
But it would still occasionally double down on its lies. As late as 4am on Monday, hours after Grok had acknowledged it was wrong, the bot continued to spread the lie as if it were real. (This wasn’t the only incorrect fact Grok spread about the attack. It also wrongly claimed that footage of al-Ahmed was actually repurposed old footage, again undermining al-Ahmed’s heroics.)
AI chatbots producing incorrect information is nothing new. Neither is the idea that they can be deliberately seeded with disinformation; research suggests Russia is actively pumping out propaganda that is being absorbed by mainstream chatbots.
But what is different, and worrying, about Grok and the Bondi terrorist attack is the near-instant creation of a closed loop of AI misinformation. It appears that AI was used to generate the lie, which was then absorbed by AI, before being instantly regurgitated in a breaking news situation.
What we saw was an instant AI ouroboros, the snake eating its own tail, which was then picked up and used to beat down others. X users repeatedly prompted Grok’s Edward Crabtree answer to contradict viral posts about al-Ahmed’s actions.
In response to one viral post about al-Ahmed, one user replied, “He’s not who they say he is. He’s an IT professional, his name is Edward Crabtree, not Ahmed.”
Then they responded to their own post, “@grok who’s edward crabtree?”, to prompt the AI bot to tell them that he was the heroic bystander. Grok’s answers never link to its source of information, and rarely even name it. Audiences are authoritatively told something as if it were undeniable fact.
I don’t think Grok’s lies circulated particularly far. It seemed like, most of the time, people were calling in Grok to bolster their own beliefs. Even among the misinformation swirling around the Bondi terrorist attack, Grok was far from the biggest player.
But it’s worth thinking about in the context of Musk’s mission to dismantle trust in traditional institutions in favour of the things that he owns and controls.
Under him, Twitter’s blue check mark went from proof of someone’s identity and importance to proof, on X.com, that you have A$13 a month and a promise that your posts will be prioritised.
Grok is meant to be “maximally truth-seeking”, but Musk has also promised to put his finger on the scale after Grok gave answers he didn’t like. His latest venture, Grokipedia, is a largely AI-generated online encyclopedia that predominantly copies from Wikipedia, with some differences that can be attributed to its use of neo-Nazi forums as a source.
All of this (280 characters as the base unit of truth, news only existing if it is posted to X, an algorithmic feed serving its users content based on an inscrutable recipe of politics-pushing and engagement-hacking, and an AI chatbot that by definition doesn’t “know” anything but speaks as if it is undeniable) is pushing us towards a world where the truth is never witnessed, only relayed to us through the warped voices of others.
Zooming out from Musk, this is also a glimpse of a new form of information warfare with AI as its target. Bad actors will race to poison the handful of products that are increasingly the central source of news and information for hundreds of millions of people. We already see people fill news vacuums with misinformation; now the payoff is having your version of the world laundered through a trusted AI companion. And what better tool to assist with this than generative AI, a technology that can instantly produce content that excels at imitating truth?
What comes next is predictable: the complete automation of this process so that it happens without any human intervention at all. It’s not only conceivable that someone could train AI to detect interesting breaking news events, spin up a false counter-narrative, generate content promoting that view, and seed it out to the world via social media bots; it’s possible right now.
(I’m not exaggerating. It took me five minutes to set up a commercial AI chatbot to review the world’s news, pick an event, create a contradictory account, and generate an entire news website with multiple articles written about it. Give me a couple of dollars and a few extra minutes, and I could register a domain for the website and push it out via bought social media accounts.)
When the world’s incentives are set up to prioritise engagement and extremes over truth, and to encourage counter-narratives for the portion of the world that defaults to believing the opposite of what they’re told, people will take advantage. And now, robots will too.
This story first appeared on Crikey. You can read the original here.
