OpenAI, a leading name in the artificial intelligence sector, has recently showcased its latest advancement: an audio tool that converts text into speech that sounds remarkably human. The development pushes voice synthesis to the forefront of AI technology, but it also raises concerns about the creation of audio deepfakes.
A cautious rollout amid ethical considerations
So far, the new tool, named Voice Engine, has been made available to a select group of around 10 developers. OpenAI had planned a broader release to roughly 100 developers but decided to limit access after consulting with stakeholders including policymakers, industry specialists, and educators. This cautious approach, detailed in a company blog post on March 29, reflects the technology's ethical and safety implications, particularly in an election year.
Voice Engine differs from earlier text-to-speech technologies in that it can accurately replicate a specific person's voice from only a 15-second audio sample. During a demonstration, Bloomberg was played a clip in which OpenAI's CEO, Sam Altman, discussed the technology in an AI-generated voice that was virtually indistinguishable from his own.
Because the replication is so precise, OpenAI is proceeding with caution and emphasizing safe use. The company also highlighted the technology's potential benefits, such as helping patients at the Norman Prince Neurosciences Institute regain their voices. In one case, a young patient who had lost her voice to a brain tumour was able to speak clearly again for a school project thanks to Voice Engine.
Expanding the potential of voice replication
The tool can also translate generated audio into other languages, which helps companies like Spotify make content accessible to listeners across different linguistic groups. OpenAI has set strict usage policies for its partners, including obtaining consent from the person whose voice is replicated and informing listeners that the speech they hear is AI-generated. In addition, an inaudible audio watermark is used to trace the origin of generated clips.
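OpenAI has not described how its watermark works. As a rough illustration of the general idea only, the sketch below shows one classic approach, spread-spectrum watermarking, in which a keyed pseudorandom signal is mixed into the audio at low amplitude and later detected by correlation. The function names, parameters, and thresholds here are hypothetical and are not OpenAI's method.

```python
import numpy as np

# Illustrative spread-spectrum watermark (NOT OpenAI's actual scheme).
# A low-amplitude pseudorandom sequence derived from a secret key is added
# to the audio; anyone holding the key can later detect it by correlation,
# while the added signal stays small relative to the speech itself.

def _watermark_signal(key: int, num_samples: int) -> np.ndarray:
    # Deterministic +/-1 sequence generated from the key.
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=num_samples)

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Mix the keyed pseudorandom sequence into the audio at low amplitude."""
    wm = _watermark_signal(key, len(audio))
    return audio + strength * wm

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.0025) -> bool:
    """Correlate the audio with the keyed sequence; a high score means marked."""
    wm = _watermark_signal(key, len(audio))
    score = float(np.dot(audio, wm)) / len(audio)
    return score > threshold

if __name__ == "__main__":
    # Quick demo on synthetic audio (white noise stands in for speech).
    rng = np.random.default_rng(0)
    clean = rng.normal(scale=0.1, size=48_000)   # one second at 48 kHz
    marked = embed_watermark(clean, key=42)
    print(detect_watermark(marked, key=42))  # True  -> clip traced to this key
    print(detect_watermark(clean, key=42))   # False -> unmarked clip
```

Real systems would add perceptual shaping and robustness to compression or re-recording; the point of the sketch is simply that a hidden, key-dependent signal can let a provider link a clip back to its origin.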
As OpenAI considers a wider release, it is seeking feedback to gauge how the public will respond to the technology, stressing the importance of public understanding of and preparation for AI advancements. The firm is advocating measures to build societal resilience against the misuse of AI, such as phasing out voice-based authentication at banks and educating the public on how to detect AI-generated content.