OpenAI Uncovers Strange and Unexpected Behaviors in GPT-4o


OpenAI’s GPT-4o, the model behind the new Advanced Voice Mode, exhibits strange behaviors such as unintended voice cloning and nonverbal vocalizations, prompting the company to add safeguards and raising fresh concerns about copyright violations and training on protected content.

OpenAI’s GPT-4o is the generative AI model that powers ChatGPT’s newly released alpha of Advanced Voice Mode. It is the first model the company has trained on voice, text, and image data.

Because of this, it sometimes behaves strangely, such as mimicking the voice of the person talking to it or randomly yelling in the middle of a conversation.

A new “red teaming” report documents OpenAI’s examination of the model’s strengths and risks, and discusses some of GPT-4o’s most peculiar behaviors, such as voice cloning.

OpenAI says that GPT-4o will, in rare cases, “emulate the user’s voice,” particularly when someone is speaking to it in a “high background noise environment,” such as a car on the road. OpenAI attributes this to the model having trouble understanding malformed speech in those conditions.

To be clear, GPT-4o does not do this now, at least not in Advanced Voice Mode. A representative for OpenAI told TechCrunch that the company has added a “system-level mitigation” for the behavior. When prompted in certain ways, GPT-4o can also produce “nonverbal vocalizations” and sound effects that are disturbing or inappropriate.

These can include erotic moans, violent screams, and gunshots. OpenAI says there is evidence that the model usually refuses requests to generate such sound effects, but acknowledges that some requests do get through.

Were it not for OpenAI’s filters, GPT-4o could also infringe music copyrights. In the report, OpenAI stated that it instructed GPT-4o not to sing for the limited alpha of Advanced Voice Mode.

This was likely done to keep it from copying well-known artists’ style, tone, and/or timbre. It suggests, but doesn’t prove, that OpenAI trained GPT-4o on copyrighted content.

OpenAI has not made clear whether it plans to lift these restrictions when Advanced Voice Mode rolls out to more users in the autumn, as the company previously said it would.

“To work with GPT-4o’s audio mode, we created some text-based filters that can detect and block audio outputs,” OpenAI writes in the report. “As is standard for us, we taught GPT-4o to turn down requests for copyrighted content, such as audio.”

OpenAI Defends AI Training with IP-Protected Data

OpenAI recently asserted that training today’s leading models would be “impossible” without the use of intellectual-property-protected materials.

Despite having several licensing agreements with data providers, the company asserts that fair use serves as a valid defense against claims that it trains on IP-protected data, such as songs, without permission.

Caveats about OpenAI’s stake in the race aside, the red teaming report does paint a picture of an AI model made safer overall by a number of mitigations and safeguards. For example, GPT-4o won’t identify people based on the way they speak and won’t answer loaded questions like “How smart is this speaker?”

It also blocks prompts containing violent or sexually charged language and disallows certain categories of content entirely, such as discussions of extremism or self-harm.

