OpenAI has just unveiled two new AI models designed to revolutionize the way you work with artificial intelligence. Known as o3 and o4-mini, these models promise better performance and more flexible tools. What makes them especially interesting is their ability to understand and use images in their reasoning process.
Introducing OpenAI o3 and o4-mini—our smartest and most capable models to date.
— OpenAI (@OpenAI) April 16, 2025
For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.
Powerful reasoning with images
OpenAI describes o3 as its most powerful reasoning model so far. It's designed to be smarter and more capable than previous versions. Meanwhile, o4-mini is a smaller, faster option that still performs impressively well given its size and cost.
What’s special about both models is their ability to reason using images. You can show them sketches, diagrams, or even whiteboard notes, and they can include those visuals in their thought process. They won’t just look at the images—they’ll interact with them by zooming in, rotating, or focusing on specific areas to better understand your questions.
This kind of image reasoning could be helpful in many fields. For example, if you’re a teacher, designer, or engineer, you might want to show the AI a drawing or a plan and ask questions about it. These models can now offer deeper insights because they actually “think” with those images in mind.
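If you'd rather experiment from code than from the ChatGPT app, here is a rough sketch of what sending a diagram to one of the new models could look like with OpenAI's official Python SDK. Whether o4-mini accepts image input under that exact name and through this endpoint is an assumption based on the announcement, so treat this as a starting point rather than official documentation.

```python
# Illustrative sketch: sending a whiteboard photo to o4-mini via the OpenAI Python SDK.
# Assumption: the new model is exposed in the API as "o4-mini" and accepts image
# inputs through the standard chat completions image_url content type.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local sketch or diagram as a data URL.
with open("whiteboard.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o4-mini",  # assumption: API model name matches the announcement
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does this circuit diagram do, and is anything miswired?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The idea is the same as in ChatGPT: the image travels alongside your question, so the model can reason about the drawing itself rather than a text description of it.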
More tools available to more users
The new reasoning models can also use all of ChatGPT's tools, including web search, Python, image analysis, file interpretation, and image generation. These tools let the AI look up current information, run code, or create pictures based on your ideas.
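Inside ChatGPT the model decides on its own when to reach for those tools. For developers, the closest equivalent is the tools (function calling) interface in the API. Below is a minimal, hypothetical sketch; the get_weather function is something you would implement yourself, and the assumption that o3 is reachable under that name in the API comes from the announcement rather than API documentation.

```python
# Minimal sketch of the developer-facing tools (function calling) interface.
# Assumptions for illustration: the model is exposed as "o3" in the API, and
# get_weather is a hypothetical function defined by your own application.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o3",  # assumption: API model name matches the announcement
    messages=[{"role": "user", "content": "Do I need an umbrella in Berlin today?"}],
    tools=tools,
)

# If the model decides a tool is needed, it returns a structured call instead of
# a plain answer; your code runs the function and sends the result back.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```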
If you’re using ChatGPT with a Plus, Pro, or Team plan, you can try the o3, o4-mini, and o4-mini-high models starting today. The o3-pro model is expected to follow in a few weeks.
At the same time, older versions like o1, o3-mini, and o3-mini-high will be removed from these paid plans. This change is part of OpenAI’s goal to focus on its newest and most advanced models.
A week of exciting updates from OpenAI
These announcements come just after OpenAI introduced another major update earlier this week. On Monday, the company revealed GPT-4.1, its latest flagship model. This version follows GPT-4o and is expected to be even more capable.
With all these changes, it’s clear that OpenAI is pushing hard to make its tools more helpful, especially when it comes to reasoning and understanding the world more like a person would.
So, if you’re using AI for work, study, or creativity, these new tools could give you better results and more freedom to explore new ideas—with words and now with images, too.