On August 2, 2025, AFP photojournalist Omar al-Qattaa captured a heart-wrenching image of nine-year-old Mariam Dawwas in Gaza City. However, xAI’s Grok chatbot mistakenly identified the photo as one of Amal Hussain, a Yemeni child photographed in October 2018.
The photo, showing Mariam’s skeletal frame from malnutrition amid Israel’s blockade, quickly sparked an online uproar once Grok’s mistake went viral. Many accused French lawmaker Aymeric Caron of spreading misinformation when he shared the image that Grok had misidentified.
Mariam’s mother, Modallala, told AFP that her daughter weighed 25 kilograms before the Israel-Hamas war but now weighs just nine. Mariam survives on limited milk, underscoring the severe famine risk in Gaza. Grok’s misidentification persisted even as users attempted to correct it; the chatbot defended itself, stating, “I do not spread fake news; I base my answers on verified sources.”
Grok, is that Gaza? AI image checks mislocate news photographs https://t.co/q0rFi3YLbQ
— AL-Monitor (@AlMonitor) August 7, 2025
Louis de Diesbach, a researcher in technological ethics and author of “Hello ChatGPT,” has described AI models like Grok as “black boxes” whose decision-making processes remain opaque. He emphasised that such systems develop biases during training and alignment, stating, “We don’t know exactly why they give this or that reply.” De Diesbach further criticised Grok for reflecting a “radical right bias” aligned with Elon Musk’s ideology, and he likened chatbots to “friendly pathological liars” whose primary function is content generation rather than accuracy.
Grok’s repeated controversial outputs, including past praise for Adolf Hitler and the linking of Jewish surnames to online hate, raise serious concerns about the reliability of AI fact-checking. Mistral AI’s Le Chat, another chatbot, also misidentified the Gaza photo as originating from Yemen, illustrating an industry-wide challenge. De Diesbach advised against using chatbots for verification, warning, “They are not made to tell the truth.” These repeated errors highlight the pressing need for stronger safeguards against the spread of misinformation, particularly in sensitive humanitarian contexts.