OpenAI has just rolled out a new way to save on AI usage costs. It’s called Flex processing, and it offers lower prices in exchange for slower responses and occasional resource unavailability. The change is part of OpenAI’s effort to keep pace with rivals like Google, which are pushing out cheaper, faster AI models.
Flex processing is designed for less urgent tasks such as model evaluations, data enrichment, and background jobs. It’s in beta and available for two of OpenAI’s latest reasoning models, o3 and o4-mini.
Half the price for non-urgent use
Flex processing cuts your API costs exactly in half. With the o3 model, you’ll pay US$5 per million input tokens (roughly 750,000 words) and US$20 per million output tokens, down from the standard US$10 and US$40. For the o4-mini model, the Flex rate drops to US$0.55 per million input tokens and US$2.20 per million output tokens, compared with the regular US$1.10 and US$4.40.
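For developers, opting in is a single parameter change. Here’s a minimal sketch using the official openai Python SDK; the service_tier="flex" setting follows OpenAI’s Flex documentation, while the model choice and prompt are just placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request Flex processing by setting service_tier="flex";
# the call is otherwise identical to a standard request.
response = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Classify this support ticket: ..."}],
    service_tier="flex",
)

print(response.choices[0].message.content)
```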
Of course, these savings come with a trade-off. With Flex processing, you may notice slower response times and occasional delays when resources are temporarily unavailable. But if your work doesn’t need instant results, this option could help you stay within budget while still accessing advanced AI tools.
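In practice, that means budgeting for longer timeouts and being ready to retry when Flex capacity is busy. The sketch below assumes the openai Python SDK and that temporary unavailability surfaces as a rate-limit style error; the timeout, backoff values, and fallback to standard processing are illustrative choices, not official recommendations.

```python
import time
from openai import OpenAI, RateLimitError

# Flex jobs can take longer, so give the client a generous timeout (seconds).
client = OpenAI(timeout=900.0)

def flex_with_fallback(messages, retries=3):
    """Try Flex first; if capacity is temporarily unavailable,
    back off and retry, then fall back to standard processing."""
    for attempt in range(retries):
        try:
            return client.chat.completions.create(
                model="o4-mini",
                messages=messages,
                service_tier="flex",
            )
        except RateLimitError:
            # Flex resources are busy; wait a bit before trying again.
            time.sleep(2 ** attempt)
    # Last resort: pay the standard rate to get the answer now.
    return client.chat.completions.create(model="o4-mini", messages=messages)

result = flex_with_fallback(
    [{"role": "user", "content": "Summarize yesterday's error logs: ..."}]
)
print(result.choices[0].message.content)
```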
Why Flex is launching now
The timing of this launch isn’t random. As AI development becomes more expensive, companies are looking for ways to offer affordable tools. Just recently, Google introduced Gemini 2.5 Flash, a budget-friendly model that performs as well as, and in some cases better than, competitors like DeepSeek’s R1 while charging less per input token.
By offering Flex processing, OpenAI makes it easier to run experiments or process large batches of data without paying premium prices. This move is aimed at users who want access to robust AI but don’t need real-time speed.
New ID checks for certain users
In the same update, OpenAI told users that some developers must now complete an ID verification process. This applies to accounts in usage tiers 1 through 3, which are determined by how much you’ve spent on OpenAI services. If you fall into one of these tiers and want to access the o3 model, or specific features such as reasoning summaries and the streaming API, you’ll need to verify your identity first.
OpenAI says this step helps prevent misuse of its tools and ensures that users follow its policies. It’s another sign that as AI grows more powerful, companies are taking extra care to manage who can use these systems and how.
Whether you’re running a business, managing a project, or experimenting with AI for fun, Flex processing could be a cost-effective choice, especially if your tasks don’t require instant replies. At half the price and with added flexibility, it might be just the right fit for your next AI-powered idea.