ChatGPT’s Advanced Voice Mode, known for enabling real-time spoken conversations with the chatbot, may soon gain visual capabilities. Code uncovered in the latest beta version of the app hints at a forthcoming “live camera” feature. The discovery, made in ChatGPT v1.2024.317 and first reported by Android Authority, suggests the rollout of this feature could be just around the corner, though OpenAI has yet to confirm an official release date.
A glimpse into the feature’s early tests
This is not the first sign of ChatGPT gaining a visual edge. During the initial alpha testing phase of Advanced Voice Mode in May, OpenAI demonstrated the feature’s potential visual capabilities. In one example, the chatbot used a phone’s camera to identify a dog, recognise its ball, and associate the two in the context of playing fetch. This ability to observe, understand, and link objects to real-world scenarios was widely praised by early testers.
Alpha testers were quick to explore the feature’s uses. In one notable example, Manuel Sainsily, a user on X (formerly Twitter), utilised the camera to ask questions about his new kitten, showing how the feature could offer both fun and practical benefits.
Trying #ChatGPT‘s new Advanced Voice Mode that just got released in Alpha. It feels like face-timing a super knowledgeable friend, which in this case was super helpful — reassuring us with our new kitten. It can answer questions in real-time and use the camera as input too! pic.twitter.com/Xx0HCAc4To
— Manuel Sainsily (@ManuVision) July 30, 2024
When Advanced Voice Mode entered beta testing in September for ChatGPT Plus and Enterprise users, its visual functionality was notably absent. Despite this, the voice feature gained immense popularity for enabling natural, dynamic conversations. According to OpenAI, users could interrupt the chatbot at any moment, and it could even pick up on the speaker’s emotional tone.
What sets it apart from competitors?
We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK
— OpenAI (@OpenAI) July 30, 2024
ChatGPT could gain a unique edge over rivals such as Google and Meta if the live camera feature is introduced. Google’s conversational AI, Gemini Live, supports more than 40 languages but lacks visual processing capabilities. Similarly, Meta’s Natural Voice Interactions, showcased at the Connect 2024 event in September, cannot use camera input. While both systems are capable in their own ways, OpenAI’s visual feature could redefine how AI assistants interact with the world.
Desktop users can now enjoy Advanced Voice Mode
In a related update, OpenAI announced that Advanced Voice Mode is now available to paid users on desktop. Previously limited to mobile devices, the feature can now be accessed directly on laptops and PCs.
Another Advanced Voice update for you—it’s rolling out now on https://t.co/nYW5KO1aIg on desktop for all paid users.
— OpenAI (@OpenAI) November 19, 2024
So you can easily learn how to say the things you’re doing an entire presentation on. pic.twitter.com/n138fy4QeG
The introduction of the live camera could mark a significant leap forward, combining the abilities to see and hear in one seamless AI experience. While the exact timing remains uncertain, the potential impact of this development is already generating excitement among users and industry experts alike.