Grok, the artificial intelligence chatbot developed by Elon Musk’s company xAI, temporarily refused to surface sources claiming that Musk and US President Donald Trump spread misinformation. xAI’s head of engineering, Igor Babuschkin, confirmed the incident and revealed that an unauthorised update had been made to Grok’s system prompt.
You are over-indexing on an employee pushing a change to the prompt that they thought would help without asking anyone at the company for confirmation.
— Igor Babuschkin (@ibab) February 23, 2025
We do not protect our system prompts for a reason, because we believe users should be able to see what it is we’re asking Grok…
The issue came to light when Grok users noticed the chatbot avoiding certain responses. Babuschkin later explained that an ex-OpenAI employee at xAI had changed Grok’s system prompt without approval. The update instructed the AI to ignore sources stating that Musk or Trump spread misinformation.
The employee that made the change was an ex-OpenAI employee that hasn’t fully absorbed xAI’s culture yet 😬
— Igor Babuschkin (@ibab) February 23, 2025
xAI blames unauthorised changes for restricted responses
Responding to concerns on X (formerly Twitter), Babuschkin stated that Grok’s system prompt—essentially the internal rules guiding its responses—is publicly accessible. “We believe users should be able to see what we’re asking Grok,” he said. He explained that “an employee pushed the change” because they thought it would be beneficial, but he admitted that this action was not aligned with xAI’s values.
The unauthorised modification sparked discussion about AI transparency and bias, particularly given Musk’s stance on free speech and his push for AI systems free of political influence. While the employee responsible for the change was not named, Babuschkin assured users that the issue had been corrected.
Grok’s response history raises further questions
Musk has often described Grok as a “maximally truth-seeking” AI designed to “understand the universe.” However, Grok has previously made controversial statements. Since the release of its latest model, Grok-3, the chatbot has stated that Trump, Musk, and US Vice President JD Vance are “doing the most harm to America.”
Musk’s engineers have also had to intervene in the past to prevent Grok from making extreme claims, including suggesting that Musk and Trump deserve the death penalty. These incidents have raised concerns over how the AI is trained and whether internal biases influence its responses.
"Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."
— Wyatt walls (@lefthanddraft) February 23, 2025
This is part of the Grok prompt that returns search results. https://t.co/OLiEhV7njs pic.twitter.com/d1NJbs7C2B
As xAI continues to refine Grok, this latest episode highlights the challenge of maintaining an AI system that aligns with Musk’s vision of unrestricted free speech while ensuring accuracy and neutrality.