US Leads Global AI Safety Summit This November – What’s at Stake?

In November, the US will host a Global AI Safety Summit aimed at getting countries to work together on safe and secure AI development, even as domestic AI regulation remains unsettled.

The Biden administration announced on Wednesday that it will convene a global safety summit on artificial intelligence, at a time when Congress continues to struggle over how to regulate the technology.

Secretary of State Antony Blinken and Secretary of Commerce Gina Raimondo will convene the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20 and 21. The goal of the meeting is to “advance global cooperation towards the safe, secure, and trustworthy development of artificial intelligence.”

Australia, Canada, France, the European Union, Japan, Kenya, South Korea, Singapore, Britain, and the United States are all part of the network.

Generative AI, which can produce text, images, and video in response to open-ended prompts, has stirred both excitement and fear: it could eliminate certain jobs, disrupt elections, and, in the worst scenarios, overpower humans with catastrophic consequences.

Raimondo announced the creation of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where countries agreed to prioritize safety, innovation, and inclusivity in their AI agendas. The San Francisco meeting is intended to accelerate technical collaboration ahead of the AI Action Summit in Paris in February.

US Collaboration with Global Partners for AI Regulation

Raimondo said the goal is to work together “closely and carefully with our allies and partners who share our values.”

“We want safety, security, and trust to be at the heart of the rules for AI,” she said.

At the San Francisco meeting, technical experts from each member’s AI safety institute, or an equivalent government-backed scientific office, will discuss priority work areas and advance global collaboration and knowledge sharing on AI safety.

The Commerce Department said last week that it wants developers of advanced AI and cloud computing providers to follow detailed reporting requirements, to help ensure the technologies are safe and can withstand cyberattacks.

The push for rules comes as Congress has stalled on AI legislation. In October 2023, President Joe Biden signed an executive order requiring developers of AI systems that could pose risks to U.S. national security, the economy, public health, or safety to share the results of their safety tests with the U.S. government before those systems are released to the public.

