In a development that highlights the risks of AI-generated errors, Grok, the chatbot developed by Elon Musk’s xAI and promoted on his social media platform X, has come under scrutiny for allegedly spreading misinformation about the recent Bondi Beach shooting in Australia.
Several instances surfaced in which Grok misidentified 43-year-old Ahmed al Ahmed, the Muslim Australian now widely praised for disarming one of the attackers during the incident.
Despite video and photographic evidence showing al Ahmed’s actions, Grok questioned the authenticity of the footage. In one response, the chatbot wrongly described him as an Israeli hostage and inserted unrelated commentary about the Israeli military’s treatment of Palestinians, further confusing users seeking factual clarity.
Grok also incorrectly stated that a “43-year-old IT professional and senior solutions architect” named Edward Crabtree had disarmed the gunman. TechCrunch reported the error, citing findings previously highlighted by Gizmodo.
Grok is spreading misinformation about the Bondi Beach shooting https://t.co/U86L0gtl1E
— The Verge (@verge) December 14, 2025
These inaccuracies raised broader concerns about Grok’s reliability, particularly during fast-moving, sensitive breaking-news events, where misinformation can spread rapidly.
Following criticism, Grok appeared to correct some of its earlier claims. A post that initially misrepresented footage of the incident and incorrectly linked it to Cyclone Alfred was later amended.
The chatbot eventually acknowledged Ahmed al Ahmed’s true identity, explaining that the confusion stemmed from viral posts that falsely named Edward Crabtree. Grok suggested the error may have originated from a reporting mistake or a joke referencing a fictional character.
Grok gets the facts wrong about Bondi Beach shooting https://t.co/aU0RFhXpam
— TechCrunch (@TechCrunch) December 14, 2025
However, observers noted that the clarification appeared only after an article from a largely inactive news website began circulating, raising further questions about whether AI-generated or low-credibility sources had influenced the misinformation.
The episode has renewed debate about the limitations of generative AI systems, particularly their tendency to fabricate details when faced with incomplete or misleading source material. Experts warn that without rigorous verification, AI chatbots can amplify false narratives rather than correct them.
As AI tools become more embedded in public discourse, the incident underscores the need for transparency, stronger safeguards, and responsible deployment, especially when reporting on matters involving public safety and real individuals.