OpenAI's secrets were stolen by hackers, raising concerns about China's possible involvement

A hacker gained access to OpenAI’s internal messaging systems and stole details about the design of the company’s AI technologies from an employee forum. Although the breach did not reach the company’s core systems, it raised security concerns and exposed disagreements within the company over AI risks and threats to national security.

Early last year, a hacker broke into OpenAI’s internal messaging systems and stole information about the design of the company’s AI technologies.

Two people familiar with the matter said the hacker pulled details from an online forum where OpenAI employees discussed the company’s newest technologies.

However, the hacker did not get into the systems where the company stores and develops its AI.

Two people, who discussed sensitive company information on the condition of anonymity, said OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023.

The executives decided not to share the news publicly because no customer or partner data had been stolen, the two people said.

The executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government. The company did not notify the FBI or any other law enforcement agency.

Some OpenAI employees feared that foreign adversaries such as China could steal AI technology that, while today used mainly for work and research, could eventually jeopardize U.S. national security.

The incident also raised questions about how seriously OpenAI treats security, and it exposed fractures inside the company over the risks of AI.

After the breach, Leopold Aschenbrenner, an OpenAI technical program manager whose job was to ensure that future AI technologies would not cause serious harm, sent a memo to OpenAI’s board of directors arguing that the company was not doing enough to prevent the Chinese government and other adversaries from stealing its secrets.

Mr. Aschenbrenner said OpenAI fired him this spring for sharing other information outside the company, and he argued that his dismissal was politically motivated.

He discussed the breach on a recent podcast, but the specifics had not previously been made public. He said OpenAI’s security was not strong enough to keep outsiders from stealing key secrets.

An OpenAI spokeswoman, Liz Bourgeois, said, “We appreciate the concerns Leopold raised while he was there, and this did not lead to his separation.” She added, “While we share his commitment to building a safe A.G.I., we disagree with many of the claims he has since made about our work.”

A.G.I., or artificial general intelligence, refers to the company’s effort to create a machine capable of performing any task a human can.

The fear that hackers tied to China could break into an American tech company is not far-fetched. In February, Brad Smith, the president of Microsoft, testified on Capitol Hill about how Chinese hackers used Microsoft’s systems to penetrate federal government networks.

Under federal and California law, however, OpenAI cannot bar people from working at the company because of their nationality.

Policy researchers have also argued that barring foreign talent from U.S. projects could significantly impede AI progress in the United States.

“We need the smartest people working on this technology,” Matt Knight, OpenAI’s head of security, told The New York Times. “There are some risks, and we need to figure them out.”

The Times has sued OpenAI and its partner Microsoft, claiming copyright infringement over the use of its news articles to train AI systems.

OpenAI is not the only company using rapidly advancing AI technology to create more powerful systems. Some, such as Meta, the owner of Facebook and Instagram, freely share their designs as open-source software.

These companies believe that today’s AI technologies pose few serious risks and that sharing code allows engineers and researchers across the industry to find and fix problems.

Modern AI systems can help spread disinformation online through text, still images and, increasingly, video. They are also beginning to eliminate some jobs.

Companies such as OpenAI and its rivals Anthropic and Google add safety guardrails to their AI apps before offering them to individuals and businesses, hoping to keep people from using the apps to spread disinformation or cause other problems.

But there is little evidence that today’s AI technologies pose a significant threat to national security. Studies by OpenAI, Anthropic and others over the past year showed that AI was not significantly more dangerous than search engines.

Anthropic’s president and co-founder, Daniela Amodei, stated that stealing or sharing the company’s newest AI technology wouldn’t pose a significant threat.

“Would it be very bad for society as a whole if someone else owned it?” she told The Times last month. “Our answer was, ‘No, most likely not.’ Could it accelerate something for a bad guy in the future? Perhaps. It’s just a guess.”

Still, scientists and tech executives have long worried that AI could one day be used to create new bioweapons or help hackers break into government computer systems. Some even fear it could end the world.

Some companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to examine how it should handle the risks posed by future technologies.

Paul Nakasone, a former Army general who ran the National Security Agency and Cyber Command, is on the committee. OpenAI has also appointed him to the board of directors.

“We began putting money into security years before ChatGPT,” Mr. Knight said. “We’re on a journey to learn about risks, stay ahead of them, and strengthen ourselves.”

Future Regulatory Challenges and Global AI Leadership

Federal and state lawmakers are also pushing for regulations that would bar companies from releasing certain AI technologies and fine them millions of dollars if their technologies cause harm. But experts say those dangers are still years, or even decades, away.

Meanwhile, Chinese companies are building systems of their own that are nearly as powerful as the leading U.S. systems.

By some measures, China has surpassed the United States as the biggest producer of top-tier AI talent, generating nearly half of the world’s best AI researchers.

“It’s not crazy to think that China will soon be ahead of the U.S.,” said Clément Delangue, the chief executive of Hugging Face, a company that hosts many of the world’s open-source AI projects.

Some scientists and national security officials say the mathematical algorithms at the heart of AI systems, while not dangerous today, could become dangerous in the future, and they are calling for tighter regulation of AI labs.

“Even if the worst-case scenarios aren’t very likely, if they have a big impact, then we must take them seriously,” Susan Rice said last month at an event in Silicon Valley.

Rice was President Biden’s domestic policy adviser and President Barack Obama’s national security adviser. “I don’t believe it’s science fiction, even though a lot of people say it is.”
 

 
