
OpenAI formed a team to control ‘superintelligent’ AI

A member of OpenAI’s Superalignment team, whose job is to find ways to control and guide “superintelligent” AI systems, said that the team was promised 20% of the company’s computing power. However, requests for even a small fraction of that compute were often denied, which kept the team from getting its work done.

This week, several members of the team quit over problems like this one. Among them was co-lead Jan Leike, who during his time at OpenAI helped develop ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT.

On Friday morning, Leike explained why he was quitting. In a series of posts on X, he wrote that he had been disagreeing with OpenAI’s leadership about the company’s core priorities for quite some time, until they finally reached a breaking point. He argued that much more of the company’s effort should go into getting ready for the next generation of models, across areas like safety, security, monitoring, preparedness, adversarial robustness, superalignment, privacy, societal impact, and related topics. These issues are not easy to solve, he warned, and he worried that the company was not on the right track to do so.

OpenAI formed the Superalignment team in July of last year, led by Leike and Ilya Sutskever, who also left the company this week. Its goal was to solve the core technical challenges of controlling superintelligent AI within four years. Staffed with scientists and engineers from OpenAI’s previous alignment division, along with researchers from other parts of the company, the team was meant to conduct safety research on both OpenAI and non-OpenAI models and to share that work with the wider AI industry through initiatives such as a research grant program.

The Superalignment team did manage to publish a body of safety research and funnel grants worth millions of dollars to outside researchers. But as product launches consumed more and more of OpenAI leadership’s attention, the team had to fight for the up-front investment it saw as essential to the company’s stated mission of developing superintelligent AI for the benefit of all humanity. As Leike put it, “Creating machines that are smarter than people is inherently dangerous.”

Sutskever’s falling-out with OpenAI CEO Sam Altman added to the distraction. Sutskever and OpenAI’s previous board of directors moved abruptly to fire Altman late last year on the grounds that he had not been “consistently candid” with board members. Under pressure from many of OpenAI’s employees and investors, including Microsoft, the company ultimately reinstated Altman; most of the board resigned, and Sutskever reportedly never returned to work.

According to the source, Sutskever was vital to the Superalignment team. He not only contributed research but also connected the team with other parts of OpenAI, and he acted as a kind of ambassador, impressing on key OpenAI decision-makers the importance of the team’s work.

After Leike and Sutskever departed, another OpenAI co-founder, John Schulman, took over responsibility for the Superalignment team’s work. However, there will no longer be a dedicated team; instead, a loosely connected group of researchers embedded in divisions across the company will carry it on. A representative for OpenAI described the change as “integrating the team more deeply.” The worry is that, as a result, OpenAI’s AI development won’t be as focused on safety as it could have been.

