
OpenAI says it has started training a new Flagship AI Model


OpenAI said it has begun training a new flagship A.I. model to succeed GPT-4, with the goal of building "artificial general intelligence." At the same time, amid concerns about A.I.'s effect on society and recent changes in the company's leadership, it formed a Safety and Security Committee to address the risks the new technology will bring.

OpenAI announced on Tuesday that it has started training a new flagship artificial intelligence model that will replace the GPT-4 technology that powers its popular online chatbot, ChatGPT.

The San Francisco start-up, one of the world's leading artificial intelligence (A.I.) companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it works toward building "artificial general intelligence," or A.G.I., a machine that can do anything the human brain can do. The new model would serve as the foundation for A.I. products including chatbots, digital assistants akin to Apple's Siri, search engines, and image generators.

OpenAI also said it was creating a new Safety and Security Committee to determine how to handle the risks posed by the new model and future technologies. "We are proud to make and release models that are the best in their classes in terms of both capabilities and safety," the company said. "At this important time, we also want a strong debate."

OpenAI is racing to advance A.I. technology faster than its rivals while also answering critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, displace jobs, and even threaten people. Experts disagree on when tech companies will reach artificial general intelligence, but for more than a decade, companies such as OpenAI, Google, Meta, and Microsoft have steadily made their A.I. technologies more powerful, with a noticeable leap roughly every two to three years.

When OpenAI released GPT-4 in March 2023, it enabled chatbots and other software apps to write emails and term papers, answer questions, and analyze data. OpenAI released an updated version of the technology this month, though it is not yet widely available; it can generate images and respond to commands and questions in a highly conversational voice.

A few days after OpenAI demonstrated the updated version, called GPT-4o, the actress Scarlett Johansson said it featured a voice that sounded "eerily similar to mine." She said she had declined offers from OpenAI's chief executive, Sam Altman, to use her voice in the product, and that she had hired a lawyer to demand that OpenAI stop using the voice. The company said the voice was not Ms. Johansson's.

Technologies like GPT-4o learn their skills by analyzing vast amounts of digital data, including photos, videos, sounds, Wikipedia articles, books, and news stories. "Training" an A.I. model can take months or even years. Once training is complete, A.I. companies typically spend several more months testing and refining the technology before releasing it to the public, which could mean that OpenAI's next model will not arrive for at least another nine months.

While OpenAI trains its new model, the company said, its new Safety and Security Committee will work to refine policies and processes for keeping the technology safe. The committee includes Mr. Altman as well as Bret Taylor, Adam D'Angelo, and Nicole Seligman, all members of the OpenAI board. The company said the new policies could be in place in late summer or early autumn. Earlier this month, OpenAI said that Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, was leaving the company, a departure that raised concerns that OpenAI was not doing enough to address the risks posed by A.I.

Dr. Sutskever Led OpenAI’s Safety Efforts Amid Leadership Changes

In November, Dr. Sutskever joined three other board members in removing Mr. Altman from OpenAI, saying the board could no longer trust him to lead the company's effort to build A.I. that benefits people.

The company reinstated Mr. Altman five days later, after his supporters pushed for his return.

Dr. Sutskever led OpenAI’s “Superalignment” team, which researched ways to ensure that future A.I. models would not harm people. Like many others in his field, he had grown increasingly concerned that A.I. posed a threat to humanity.

Jan Leike, who led the Superalignment team alongside Dr. Sutskever, also quit the company this month, leaving the team’s future unclear.

OpenAI has folded its long-term safety research into its broader efforts to ensure that its technologies are safe. That work will be led by John Schulman, another co-founder, who previously headed the team that created ChatGPT.

The new safety committee will monitor Dr. Schulman’s work closely and guide the company in managing technological risks.
