The authenticity of AI-generated content has come under scrutiny following reports that the latest version of ChatGPT, known as GPT-5.2, cited material from Grokipedia, an AI-generated online encyclopedia launched in 2025.
Researchers and journalists say the issue matters because users increasingly depend on AI tools for information, research, and analysis.
A report by The Guardian found that ChatGPT cited Grokipedia in several test responses. The newspaper tested more than a dozen prompts, and ChatGPT referenced Grokipedia nine times. Some questions involved sensitive political and historical topics, including Iran's political system and Holocaust denial claims.
Why Grokipedia Raises Red Flags
Grokipedia presents itself as an alternative to Wikipedia. Unlike Wikipedia, it relies entirely on AI to generate and update content. The platform was created by xAI, Elon Musk's artificial intelligence company.
Experts warn that AI-only knowledge sources can introduce bias, and that errors may go unnoticed without human editorial oversight. Notably, ChatGPT did not cite Grokipedia when answering widely debated topics such as the January 6 Capitol attack and HIV/AIDS misinformation.
However, Grokipedia appeared more often in responses to obscure questions. Some answers made strong claims beyond established public evidence, including alleged links between Iranian firms and senior leadership offices.
The concern does not stop with one platform. Other large language models, including Claude by Anthropic, have also cited Grokipedia in some of their outputs.
This pattern has intensified calls for better transparency across the AI industry. OpenAI says its models draw from licensed data, human-created material, and publicly available sources. The company adds that safety filters aim to limit harmful or misleading content.
Despite these measures, experts stress that unclear sourcing can still erode user trust. AI researchers urge developers to strengthen source evaluation, warning that unchecked AI references may mislead users and reinforce misinformation.
As AI tools shape public discourse, experts say transparency and accountability must remain a priority.