Britain Expands AI Safety Hub to San Francisco

The British government is taking its AI safety efforts global, expanding its testing facility for AI models to the United States.
May 20, 2024

In a strategic move to bolster its influence in the realm of artificial intelligence, the British government is set to expand its AI Safety Institute to the United States. This initiative aims to enhance international collaboration and solidify the UK’s position as a global leader in AI safety and regulation.

Announced on Monday, the new U.S. branch of the AI Safety Institute will open in San Francisco this summer. The expansion mirrors the structure and objectives of its London counterpart, which has been operational since November 2023. The institute, chaired by Songkick founder Ian Hogarth, focuses on evaluating the safety of advanced AI systems. It aims to recruit a dedicated team of technical experts, led by a research director, to spearhead these efforts in the Bay Area.

UK Technology Minister Michelle Donelan hailed the expansion as a demonstration of British leadership in AI. She emphasized the importance of this move in allowing the UK to study AI's risks and potentials from a global perspective, strengthening ties with the U.S., and enabling other nations to benefit from British expertise in AI safety.

"This initiative will enable the UK to tap into the vast pool of tech talent in the Bay Area, engage with major AI labs in both London and San Francisco, and solidify our relationship with the United States to advance AI safety for the public good," the government stated.

San Francisco, an epicenter of AI innovation and home to industry players like OpenAI, will provide fertile ground for the institute's new team. OpenAI, backed by Microsoft, is best known for developing the viral AI chatbot ChatGPT.

The AI Safety Institute's foundation was laid during the AI Safety Summit held at Bletchley Park, England, in November 2023. This summit marked a significant step towards fostering international cooperation on AI safety. The upcoming AI Seoul Summit in South Korea, scheduled for this week, builds on this momentum, further emphasizing global collaboration in the field.

Since its inception, the institute has made notable progress in evaluating cutting-edge AI models from leading industry players. While several models have successfully handled cybersecurity challenges and demonstrated advanced knowledge of chemistry and biology, all tested models remain susceptible to "jailbreaks" and have struggled with more complex tasks in the absence of human supervision. The specific models evaluated remain undisclosed, although the UK government has previously engaged with OpenAI, DeepMind, and Anthropic for such evaluations.

As the UK steps up its AI safety initiatives, it faces scrutiny for its lack of formal AI regulations, especially compared to the European Union's proactive stance. The EU's groundbreaking AI Act, poised to set a global standard for AI legislation, underscores the regulatory gap the UK seeks to address through these strategic expansions.

The establishment of the AI Safety Institute’s U.S. counterpart marks a pivotal advancement in international AI safety efforts, leveraging the synergy between the UK and the U.S. to tackle the challenges and harness the potential of artificial intelligence. This cross-continental initiative is a testament to the UK's commitment to shaping a safe and innovative AI future.
