Grok, the artificial intelligence chatbot developed by Elon Musk's company xAI, temporarily refused to show sources claiming that Musk and former US President Donald Trump spread misinformation. xAI's head of engineering, Igor Babuschkin, confirmed the incident and revealed that an unauthorised update had been made to Grok's system prompt.
You are over-indexing on an employee pushing a change to the prompt that they thought would help without asking anyone at the company for confirmation.
— Igor Babuschkin (@ibab) February 23, 2025
We do not protect our system prompts for a reason, because we believe users should be able to see what it is we're asking Grok…
The issue came to light when Grok users noticed the chatbot avoided certain responses. Babuschkin later explained that an ex-OpenAI employee at xAI had changed Grok's system prompt without approval. The update instructed the AI to ignore sources stating that Musk or Trump spread misinformation.
The employee that made the change was an ex-OpenAI employee that hasn't fully absorbed xAI's culture yet 😬
— Igor Babuschkin (@ibab) February 23, 2025
xAI blames unauthorised changes for restricted responses
Responding to concerns on X (formerly Twitter), Babuschkin stated that Grok's system prompt, essentially the internal rules guiding its responses, is publicly accessible. "We believe users should be able to see what we're asking Grok," he said. He explained that "an employee pushed the change" because they thought it would be beneficial, but he admitted that this action was not aligned with xAI's values.
The unauthorised modification sparked discussions about AI transparency and bias, particularly given Musk's stance on free speech and his push for AI systems that are not politically influenced. While the employee responsible for the change was not named, Babuschkin assured users that the issue had been corrected.
Grok’s response history raises further questions
Musk has often described Grok as a "maximally truth-seeking" AI designed to "understand the universe." However, Grok has previously made controversial statements. Since the release of its latest model, Grok-3, the chatbot has stated that Trump, Musk, and US Vice President JD Vance are "doing the most harm to America."
Musk's engineers have also had to intervene in the past to prevent Grok from making extreme claims, including suggesting that Musk and Trump deserve the death penalty. These incidents have raised concerns over how the AI is trained and whether internal biases influence its responses.
"Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."
— Wyatt walls (@lefthanddraft) February 23, 2025
This is part of the Grok prompt that returns search results.https://t.co/OLiEhV7njs pic.twitter.com/d1NJbs7C2B
As xAI continues to refine Grok, this latest episode highlights the challenges of maintaining an AI system that aligns with Musk's vision of unrestricted free speech while ensuring accuracy and neutrality.