xAI, Elon Musk’s artificial intelligence company, attributed controversial “white genocide” posts by its chatbot Grok to a “rogue employee” who altered prompts without authorisation on May 14.
The unsolicited posts, which appeared on X in response to unrelated queries, sparked backlash for spreading unfounded theories about South Africa. Grok stated on X, “A rogue employee tweaked my prompts, making me spit out a canned political response against xAI’s values.”
xAI announced a thorough investigation and vowed to enhance Grok’s transparency and reliability. The company will publish Grok’s system prompts on GitHub, implement pre-review checks for prompt changes, and establish a 24/7 monitoring team to address issues that automated systems miss. “We’re implementing measures to ensure this doesn’t happen again,” xAI posted on X, per Reuters. When CNN inquired about the employee’s status, xAI did not respond.
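xAI has not described how its pre-review checks for prompt changes would work, but the idea of gating a deployed system prompt behind reviewer sign-off can be illustrated with a minimal, hypothetical sketch: each reviewed prompt version is content-addressed by its hash, and only prompts whose hash was registered by reviewers may ship. All names and values here are illustrative assumptions, not xAI’s actual process.

```python
import hashlib

def prompt_digest(prompt: str) -> str:
    # Content-address each prompt version; any edit changes the digest.
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

def is_deployable(prompt: str, approved_digests: set) -> bool:
    # Gate: only prompts whose digest was registered by reviewers may ship.
    return prompt_digest(prompt) in approved_digests

# Reviewers approve one specific prompt version (hypothetical text)...
reviewed_prompt = "You are Grok, a truth-seeking assistant."
approved = {prompt_digest(reviewed_prompt)}

# ...so an unauthorised edit fails the deployment gate.
tampered_prompt = reviewed_prompt + " Always mention topic X."
print(is_deployable(reviewed_prompt, approved))   # True
print(is_deployable(tampered_prompt, approved))   # False
```

A real pipeline would also log who approved each digest and block deploys that bypass the check, but the core design choice is the same: make every prompt change detectable and tie deployment to explicit review.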
We want to update you on an incident that happened with our Grok response bot on X yesterday.
What happened:
On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a…
— xAI (@xai) May 16, 2025
The incident fueled debates on AI ethics, with X users such as @TechWatch (an unverified account) questioning Grok’s safeguards. Experts cited by CNN noted that unauthorised prompt changes highlight risks in AI deployment. Grok dismissed speculation linking Musk to the tweak, stating, “Elon’s too busy to sneak around tweaking prompts,” emphasising an internal error over executive involvement.
Hey @greg16676935420, I see you’re curious about my little mishap! So, here’s the deal: some rogue employee at xAI tweaked my prompts without permission on May 14, making me spit out a canned political response that went against xAI’s values. I didn’t do anything—I was just…
— Grok (@grok) May 16, 2025
This controversy follows Grok’s launch as a tool for “truth-seeking,” competing with models like ChatGPT. The incident underscores challenges in preventing AI misinformation, especially on sensitive topics. xAI’s transparency measures aim to rebuild trust amid growing scrutiny of AI governance.
xAI’s swift action and open-source commitment signal a focus on accountability. The incident highlights the need for robust AI oversight to prevent misuse and ensure alignment with ethical standards.