OpenAI postpones voice assistant launch due to safety testing
OpenAI said it would delay the release of new voice and emotion-recognition features for ChatGPT to allow for more safety testing, following a controversy over Scarlett Johansson’s likeness and concerns about the reliability of AI-generated content.
OpenAI said on Tuesday that it would take more time to safety-test its ChatGPT chatbot before adding voice and emotion recognition.
Last month, the company showed off the tools for the first time in a demo that excited ChatGPT users. However, actor Scarlett Johansson threatened legal action, saying the company had copied her voice for one of its AI characters.
In a statement on X, OpenAI said the new features would not begin reaching paying subscribers in late June as planned; the company is delaying the initial release by one month.
According to the company, all paying users will be able to use the features in the autumn. However, it added that “exact timelines depend on meeting our high safety and reliability bar.”
OpenAI first let ChatGPT speak in one of several synthetic voices, or “personas,” at the end of last year. In May, one of those voices was used in a demonstration of GPT-4o, a newer, more powerful AI system.
The chatbot could speak in expressive tones, respond to a person’s voice and facial expressions, and have more complex conversations.
One of the voices, which OpenAI named Sky, sounds strikingly like Scarlett Johansson’s performance as an AI assistant in the 2013 movie “Her,” in which a lonely man develops feelings for his virtual companion.
OpenAI CEO Sam Altman has denied that the bot was trained on Johansson’s voice. A report last month found that the company had hired a different actor to provide training audio, based on internal records, interviews with casting directors, and information from the actor’s agent.
As the world’s biggest tech companies and newcomers like OpenAI race to lead in generative AI, some projects have run into unexpected problems.
Last month, Google scaled back how often it displays AI-generated answers atop search results after the tool made bizarre errors, such as telling people to put glue on their pizza.
In February, the search company suspended an AI image generator that had been criticized for producing historically inaccurate pictures, such as a female pope. Microsoft retooled its own AI chatbot last year after it sometimes gave strange and rude answers.
OpenAI works through the safety and ethics of ChatGPT’s Voice Mode
OpenAI said on Tuesday that it needed more time to improve the new voice mode’s ability to detect and block certain content, though it did not specify what that content was.
Many AI tools have been criticized for fabricating facts, producing racist or sexist content, or showing bias in their output. Building a chatbot that tries to understand and imitate emotions makes its interactions more complex and introduces new ways for things to go wrong.
According to OpenAI, ChatGPT’s advanced Voice Mode can understand and respond to emotions and nonverbal cues.
This brings users closer to natural, real-time conversations with AI, the company said, adding: “Our goal is to give you these new experiences with care.”