OpenAI previews new AI voice technology amid deepfake concerns

ChatGPT maker OpenAI on Friday released a first peek at a new artificial intelligence (AI) tool that can replicate human voices and produce “natural-sounding speech.”

According to a blog post by OpenAI, the Voice Engine technology needs only “a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.” The AI startup demonstrated how Voice Engine can translate text, assist with reading, and give a voice to nonverbal people or those with speech disorders. OpenAI did acknowledge that the technology carries “serious risks, which are especially top of mind in an election year.”

The company first developed Voice Engine in late 2022, and late last year it began privately testing the tool with a “small group of trusted partners.”

OpenAI noted that these partners have agreed to its usage policies, which forbid impersonating someone without permission and require the original speaker’s explicit, informed consent. The partners are also required to disclose that the voices are AI-generated.

The company says Voice Engine watermarks all of its audio to help trace its origin. According to OpenAI, any widely deployed version of the technology should include voice authentication to “verify that the original speaker is knowingly adding their voice to the service” and a “no-go voice list” to prevent the creation of voices that closely resemble prominent figures.

Additionally, the company urged institutions to phase out voice-based authentication for access to bank accounts and other sensitive information. It also remained noncommittal about whether it would eventually make the technology available to a wider audience.

“We hope to start a conversation about how to responsibly use synthetic voices and how society can get used to these new tools,” OpenAI wrote in the blog post. “After these talks and the results of these small tests, we will have a better idea of whether and how to use this technology on a larger scale.”

Concerns about the new voice technology are growing amid the potential for AI-generated deepfakes to spread election-related misinformation. In January, a robocall impersonating President Biden urged New Hampshire residents not to cast their ballots in the state’s primary.

Veteran Democratic operative Steve Kramer later acknowledged making the phony robocalls, claiming he did so to highlight the risks of artificial intelligence in politics. Similarly, a local newsletter in Arizona released a deepfake video of Republican Senate candidate Kari Lake last month to show readers “just how good this technology is getting.”
