OpenAI’s Lilian Weng Exits After 7 Years: A Move That’s Shaking the AI World

Lilian Weng, OpenAI’s VP of research and safety, has announced she is leaving the company after seven years. Her last day will be November 15.

Weng, one of OpenAI’s lead safety researchers, announced her departure on Friday. Before becoming VP of research and safety in August, she led OpenAI’s safety systems team.

“After 7 years at OpenAI, I feel ready to reset and explore something new,” Weng wrote on X. She said her last day would be November 15 but did not say where she is headed next.

“I made the very hard decision to leave OpenAI,” she wrote in the post. “When I think about what we’ve done, I’m so proud of everyone on the Safety Systems team, and I’m sure they will continue to do great work.”

Weng is the latest in a long line of AI safety researchers, policy researchers, and executives to depart OpenAI over the past year, several of whom have accused the company of prioritizing commercial products over AI safety.

She joins Ilya Sutskever and Jan Leike, the leaders of OpenAI’s now-defunct Superalignment team, which worked on methods to steer superintelligent AI systems; both left the startup this year to pursue AI safety work elsewhere.

Weng first joined OpenAI in 2018 on the company’s robotics team, which spent two years building a robot hand that could solve a Rubik’s Cube, as she notes in her LinkedIn post.

As OpenAI shifted its focus to the GPT paradigm, so did Weng, who moved in 2021 to help build the startup’s applied AI research team. Following the launch of GPT-4, she was tasked in 2023 with forming a dedicated team to build safety systems for the startup.

According to Weng’s post, OpenAI’s safety systems unit now has more than 80 scientists, researchers, and policy experts.

OpenAI’s Safety Concerns

Concerns about OpenAI’s commitment to safety have mounted as the company races to build increasingly intelligent AI systems. Longtime policy researcher Miles Brundage left the startup in October and announced that OpenAI was dissolving the AGI readiness team he had advised.

That same day, The New York Times profiled former OpenAI researcher Suchir Balaji, who said he quit because he believed the company’s technology would do society more harm than good. OpenAI told TechCrunch that executives and safety researchers are working on a transition plan to replace Weng.

“We deeply appreciate Lilian’s contributions to groundbreaking safety research and building strong technical safeguards,” an OpenAI spokesperson said in an email. “We are sure that the Safety Systems team will continue to be an important part of making sure that our systems are safe and reliable for hundreds of millions of people around the world.”

In the past few months, CTO Mira Murati, Chief Research Officer Bob McGrew, and Research Vice President Barret Zoph have also left OpenAI.

In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they were leaving. Some, including Leike and Schulman, joined OpenAI rival Anthropic; others have gone on to start their own ventures.
