During its Spring Update event, OpenAI announced GPT-4o (“o” for “omni”), its latest flagship generative AI model. The new model offers near-human response times, answering audio inputs in as little as 232 milliseconds, comparable to human response times in conversation.
GPT-4o is designed to accept queries that mix text, audio, and images, and to generate responses in those same formats. In its demonstration, the model was showcased in voice mode, letting users speak directly to ChatGPT. The blog page also features six preset voices (three male and three female) that can read the page aloud.
Advanced recognition and summarisation
The demos highlighted GPT-4o’s ability to recognise and respond to diverse inputs, including screenshots, videos, photos, documents, charts, facial expressions, and handwritten notes. Notably, GPT-4o can produce detailed summaries of video presentations, and can summarise meetings with multiple attendees from voice recordings, posing a potential challenge to transcription tools such as Otter.ai.
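As a rough illustration of the summarisation use case, the sketch below shows how a developer might ask GPT-4o to summarise a meeting from a transcript via OpenAI's Python SDK (developer access is covered in the next section). The transcript file, system prompt, and overall workflow here are illustrative assumptions rather than OpenAI's published method; the snippet assumes the `openai` package (v1.x) and an `OPENAI_API_KEY` environment variable.

```python
# Hypothetical sketch: summarising a multi-attendee meeting with GPT-4o.
# Assumes a transcript was already produced by a speech-to-text step;
# "transcript.txt" is a placeholder file name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Summarise the meeting: list attendees, key points, "
                       "decisions, and action items.",
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```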
Developer access and new applications
Developers can now access GPT-4o through the API as a text and vision model, at half the price and with five times the rate limits of GPT-4 Turbo. Additionally, a new ChatGPT desktop app for macOS is available, with a Windows version on the way.
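For context, a minimal sketch of a text-and-vision call to GPT-4o through OpenAI's Python SDK is shown below. The prompt and image URL are placeholders, and the snippet assumes the `openai` package (v1.x) with an `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch of a text-and-vision request to GPT-4o via the API.
# The image URL is a placeholder; any publicly reachable image works.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what this chart shows."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same `model="gpt-4o"` identifier covers both plain-text and mixed text-and-image requests; audio and video inputs were not yet exposed through this endpoint at launch.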
OpenAI’s release of GPT-4o underscores its dedication to advancing AI technology and making it more accessible. With its speed and versatility, GPT-4o opens up a wide range of potential applications for both individuals and businesses.