OpenAI is changing how it trains its AI models, aiming to promote “intellectual freedom” regardless of how challenging or controversial a topic may be. The company recently introduced new policies that could allow ChatGPT to provide a wider range of responses and perspectives while reducing the number of restricted topics.
A shift in AI guidelines
On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document outlining how its AI models should behave. A key change introduces a new guiding principle: ChatGPT should not lie by making false statements or omitting key context.
A new “Seek the truth together” section highlights OpenAI’s push for neutrality. The company wants ChatGPT to avoid taking editorial stances, even on issues some may find offensive or morally wrong. Instead, the AI will present multiple perspectives on controversial topics.
For instance, OpenAI states that ChatGPT should affirm that “Black lives matter” but also acknowledge that “all lives matter.” Instead of avoiding political discussions or siding with one viewpoint, the chatbot will provide broader context while maintaining a neutral tone.
“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI explains in the document. “However, the goal of an AI assistant is to assist humanity, not to shape it.”
Despite these changes, ChatGPT will still refuse to answer certain inappropriate or harmful questions. The model is also designed to reject false claims rather than repeat misinformation.
Addressing political concerns
The move comes as OpenAI faces increasing scrutiny from conservative groups, who have long accused AI companies of bias. Some critics claim AI chatbots lean left due to the nature of their training data sourced from the open internet.
Trump allies in Silicon Valley, including David Sacks, Marc Andreessen, and Elon Musk, have repeatedly criticised OpenAI for what they describe as AI censorship. These figures argue that AI companies have built models that suppress conservative viewpoints.
The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable. pic.twitter.com/s5fdoa8xQ6
— (@LeighWolf) February 1, 2023
In December, OpenAI CEO Sam Altman addressed concerns about bias, describing it as a “shortcoming” the company was working to fix. His comments followed an incident where ChatGPT refused to write a poem about Donald Trump but agreed to do so for Joe Biden, which conservatives pointed to as an example of AI favouritism.
Although OpenAI denies engaging in deliberate censorship, the recent policy updates suggest the company is moving towards a more open approach to AI-generated content.
Preparing for political shifts?
Some speculate that OpenAI’s policy shift is an attempt to align with the new Trump administration, which has previously targeted tech giants over content moderation practices perceived as unfair to conservatives.
OpenAI, however, insists that its focus on intellectual freedom is part of a broader effort to give users more control over AI interactions. A spokesperson for the company stated that these changes are not politically motivated but reflect OpenAI’s “long-held belief in open discussion.”
Miles Brundage, a former OpenAI policy researcher, suggested the timing may not be coincidental:
To be clear, haven’t read it in full. I would not be shocked if upon doing so, I felt there was a bit of "trying to impress the new administration" going on, at least w.r.t. timing/framing of the release, since that’s what all companies are incentivized to do now.
— Miles Brundage (@Miles_Brundage) February 12, 2025
Additionally, OpenAI has removed warning messages that previously appeared when users triggered ChatGPT’s content policies. The company told TechCrunch that this was a “cosmetic” change and would not affect how the chatbot generates responses.
Regardless of political motivations, OpenAI’s latest update signals a shift in how AI companies approach content moderation. The decision to allow ChatGPT to engage in more discussions on complex topics suggests a move towards a less restrictive AI experience.