UK opens office in San Francisco to address AI risk

The UK's AI Safety Institute is opening an office in San Francisco to work at the center of AI development. Its goal is to address AI risks and collaborate internationally, underscoring how important understanding and controlling AI technologies has become for everyone's safety.

The United Kingdom is stepping up its efforts in the field ahead of the AI safety summit that starts later this week in Seoul, South Korea. The AI Safety Institute, a U.K. body founded in November 2023 with the ambitious goal of assessing and addressing risks in AI platforms, announced that it will open a second location in San Francisco.

The aim is to be closer to what is currently the epicenter of AI development. The Bay Area is home to OpenAI, Anthropic, Google, Meta, and other companies building foundation models, the fundamental components of generative AI services and other applications. Notably, the U.K. is still investing in a direct presence in the U.S. to address the issue, even though the two countries have signed an MOU to cooperate on AI safety initiatives.

Michelle Donelan, the U.K. secretary of state for science, innovation, and technology, said: "Having people on the ground in San Francisco will give them access to the headquarters of many of these AI companies. Some of them already have bases here in the United Kingdom, but we think having one there would be very helpful too. That way, they could benefit from an even larger pool of talent and work even more closely with the United States."

Part of the reason is that being closer to the epicenter helps the U.K. understand what is being built, and it also gives the U.K. more visibility with these firms, which matters because the U.K. sees AI and technology more broadly as a major opportunity for economic growth and investment. And with the recent turmoil at OpenAI around its Superalignment team, it seems a particularly timely moment to establish a presence there.

The AI Safety Institute: Challenges and Progress

The AI Safety Institute launched in November 2023 and remains a small operation: just 32 people work for the organization today. That makes it something of a David against the Goliaths of AI, since the companies building AI models have billions of dollars riding on them and strong financial incentives to get their technologies out the door and into the hands of paying users.

This month the AI Safety Institute took one of its most significant steps forward, releasing Inspect, its first set of tools for testing the safety of foundation AI models.
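To give a sense of what an Inspect evaluation looks like, here is a minimal sketch modeled on the public examples in the open-source inspect_ai package. The task name, the one-sample dataset, and the model string are illustrative assumptions, and module paths or parameter names may differ between Inspect versions.

# Minimal sketch of an Inspect evaluation task. The task name, the
# toy dataset, and the model string are illustrative assumptions.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.solver import generate
from inspect_ai.scorer import includes

@task
def demo_eval():
    # One-sample dataset: an input prompt plus the substring the
    # scorer should look for in the model's output.
    return Task(
        dataset=[Sample(input="What is the capital of the UK?",
                        target="London")],
        solver=generate(),   # ask the model under test to answer each sample
        scorer=includes(),   # pass if the target appears in the output
    )

# Run the evaluation against a model (the model string is illustrative):
# eval(demo_eval(), model="openai/gpt-4o")

The point of the sketch is the shape of the API: a dataset of samples, a solver that elicits output from the model under test, and a scorer that grades the result. Real safety evaluations swap in much larger datasets and more elaborate solvers and scorers.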

Donelan described that release today as a "phase one" effort. Not only has evaluating models effectively proven hard so far, but engagement is also very much an opt-in and inconsistent process. As a senior official at a U.K. regulator noted, companies are under no legal obligation to have their models vetted at this point, and some are unwilling to have models vetted before release, which can mean that by the time a risk is spotted, the horse may already have bolted.

Donelan said the AI Safety Institute was still working out how best to engage with AI companies to evaluate them. The way they evaluate, she said, is a whole new field of science in itself, and the institute will keep improving and refining the process over time.

Donelan said one goal in Seoul would be to present Inspect to the regulators meeting at the summit and encourage them to adopt it too. "Now we have a way to evaluate," she said, which is why phase two should also be about making AI safe across the whole of society. In the longer term, Donelan believes the U.K. will pass more AI legislation, but, echoing what Prime Minister Rishi Sunak has said on the subject, it will wait to do so until it better understands the risks. "We don't believe in passing laws before we fully understand them," she said, adding that the institute's recent international AI safety report, which focused largely on assembling a comprehensive picture of research to date, "highlighted that big gaps are missing and that we need to encourage and incentivize more research globally."

And in the U.K., she added, legislation takes about a year to pass. If the government had started writing laws last November instead of planning the AI Safety Summit, it would still be writing them now, with nothing yet to show for it.

Ian Hogarth, chair of the AI Safety Institute, said, “From the beginning of the Institute, we’ve been clear on how important it is to look at AI safety from an international perspective, share research, and work with other countries to test models and predict the risks of new AI.”

"Today is a big day that will help us move this agenda even further forward. We're excited to be expanding our operations to a place brimming with tech talent, adding to the incredible knowledge that our staff in London has brought from the start."
