Microsoft Research Asia has unveiled VASA-1, an experimental AI tool that can transform still images or drawings of people into realistic videos in which they appear to talk or sing. Given an existing audio file, the tool animates a photo with facial expressions, head movements, and lip movements closely synced to the speech or song.
On the project’s webpage, you can find numerous examples that showcase how lifelike these animations can be. Although some lip and head movements still look a bit mechanical or slip slightly out of sync, the overall effect is convincing enough to be mistaken for real footage.
There is significant potential for misuse, particularly in the creation of deepfake videos, something Microsoft’s researchers openly acknowledge. Consequently, they have decided against releasing any public demos, APIs, or further implementation details until they can be confident the tool will be used responsibly and in accordance with stringent regulations. They have not, however, described specific safeguards that would stop malicious actors from using such a tool for deepfake pornography or misinformation campaigns.
Despite these concerns, the technology promises several beneficial applications. It could advance educational equity and improve accessibility for people with communication challenges by giving them an avatar that speaks on their behalf. It could also provide companionship and therapeutic support, especially in programmes that offer interactions with AI-powered characters.
VASA-1 was trained on the VoxCeleb2 dataset, which contains over 1 million utterances from 6,112 celebrities, extracted from YouTube videos. Interestingly, it works not just on real faces but also on artistic ones: one amusing example animates the Mona Lisa in sync with an audio clip of Anne Hathaway’s viral Lil Wayne-style rap “Paparazzi,” which is quite delightful and worth a watch.