OpenAI Sets Up New Committee to Oversee Safety and Security
OpenAI, the maker of ChatGPT, said on Tuesday that it had formed a safety and security committee to review the company’s processes and safeguards. The move comes amid growing concern about rapidly advancing AI technology.
The committee is expected to complete its review within 90 days. After that, the company said in a blog post, it will make recommendations to OpenAI’s full board on critical safety and security decisions for the company’s projects and operations.
Leadership Changes and Priorities
The announcement came after two senior leaders left the company: co-founder Ilya Sutskever and fellow executive Jan Leike. Their departures raised concerns about OpenAI’s priorities, as both had focused on ensuring a safe future for humanity as AI advanced. Sutskever and Leike led the OpenAI team responsible for building systems to reduce the technology’s long-term risks; the group was tasked with developing scientific and technical methods to steer and control AI systems far smarter than humans. On leaving, Leike said that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
OpenAI’s new safety and security committee is led by board chair Bret Taylor, directors Adam D’Angelo and Nicole Seligman, and CEO Sam Altman, and also includes several of the company’s technical and policy leaders. OpenAI said it will retain and consult other safety, security, and technical experts to support the work.
The committee was created as the company began training what it calls its “next frontier model” for AI. “We are proud to build and release models that are industry-leading in both capabilities and safety,” OpenAI said in its blog post. “We welcome a robust debate at this important moment.”