Current and former OpenAI and Google DeepMind employees warn of AI risks
Current and former employees of OpenAI have raised concerns about the risks of unregulated AI, including the spread of misinformation, the loss of control of autonomous AI systems, the entrenchment of inequality, and the potential extinction of humanity.
On Tuesday, a group of current and former employees of artificial intelligence (AI) firms, including Microsoft-backed OpenAI and Alphabet-owned Google DeepMind, voiced concerns about the risks posed by the new technology.
In an open letter, 11 current and former employees of OpenAI, along with one current and one former employee of Google DeepMind, argued that the financial incentives of AI companies hinder effective oversight. “We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter said.
The letter also outlines the risks of unregulated AI, from the spread of misinformation and the entrenchment of existing inequalities to the loss of control of autonomous AI systems, potentially resulting in “human extinction.” Researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos with false voting-related information, despite those companies’ rules against such content.
AI Companies Urged to Enhance Transparency Amid Safety Concerns
The letter said that AI companies have only “weak obligations” to inform governments about the capabilities and limitations of their systems, and that these companies cannot be relied upon to share this information voluntarily. The open letter is the latest to raise safety concerns about generative AI technology, which can quickly and inexpensively produce human-like text, images, and audio.
The group called on AI companies to establish a process for current and former employees to raise risk-related concerns and to stop enforcing confidentiality agreements that prohibit criticism.
Separately, OpenAI, led by CEO Sam Altman, said on Thursday that it had disrupted five covert influence operations that sought to use its AI models for “deceptive activity” across the internet.