Microsoft Unveils Advanced AI Avatar Tool VASA-1 but Delays Public Release Over Deepfake Concerns
Microsoft’s latest AI tool, VASA-1, generates realistic human avatars, but the company will not release it publicly because of significant risks, including deepfakes.
Microsoft Research has unveiled VASA-1, a new AI model that can turn a single image and an audio clip into a strikingly lifelike human avatar, a significant leap forward in artificial intelligence. But Microsoft is not sharing the tool with the public yet, citing concerns that it could be used to create deepfake content, especially with elections approaching.
VASA-1, which stands for “visual affective skills,” can convert a still picture into a talking animation, synchronizing lip movements to the audio and adding natural head motion. The technology has drawn interest from several fields and could find applications in virtual schooling and therapeutic support. Microsoft Research says its photorealism enables near-real-time interactions with avatars that look and respond like human beings.
Microsoft is handling VASA-1 cautiously because it is still in the research phase. Researchers said, “We are against any behavior that creates misleading or harmful content of real people,” adding that VASA-1’s goal is to benefit society through responsible AI development. The company will not release an online demo, an API, or any related products until it is confident the technology meets legal and ethical standards.
Despite its legitimate uses, some disinformation experts worry that VASA-1 could be used to create hyper-realistic deepfakes. That concern is amplified by the way synthetic media already shapes public perception worldwide, leading people to base decisions on false information and eroding trust in visual material.
Ben Werdmuller, head of technology at ProPublica, wryly speculated about how such a realistic model could be used in a virtual meeting, asking whether anyone would notice if an AI replica joined a Zoom call. Similar concerns surfaced earlier this year when OpenAI delayed the release of its Voice Engine tool, which can clone a person’s voice from a short sample, over the potential for misuse of synthetic voices.
AI-generated media has already caused controversy. Earlier this year, a political consultant used an AI-powered robocall to imitate Joe Biden’s voice, claiming the stunt was meant to demonstrate how easily the technology can be abused. Like other players in the field, Microsoft wants to avoid such misuse, which is why it is putting safeguards in place before releasing these tools to the general public.
Microsoft’s measured approach to AI development, even as it stands at the forefront of AI breakthroughs, signals that the tech industry must act more responsibly as calls for regulation of AI technologies grow around the world.