Following through on commitments made at the Bletchley Park AI Safety Summit last year, the United States and the United Kingdom on Monday (April 1) signed an agreement that would see them work together to develop tests for the most advanced artificial intelligence (AI) models.
Both countries will share vital information about the capabilities and risks associated with AI models and systems, according to the agreement, which took effect immediately. They will also share fundamental technical research on AI safety and security with each other, and work on aligning their approaches to safely deploying AI systems.
The move comes as the world figures out how to set guardrails around the fast proliferation of AI systems. Although these systems offer significant opportunities, they also pose serious societal risks, from misinformation to threats to election integrity.
The agreement
As part of the partnership, both countries will align their scientific approaches and work closely to accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.
The US and the UK AI Safety Institutes have also laid out plans to build a common approach to AI safety testing and to share their capabilities so that AI risks can be tackled effectively.
Speaking to The Indian Express, an AI expert said: “They intend to perform at least one joint testing exercise on a publicly accessible model. They also intend to tap into a collective pool of expertise by exploring personnel exchanges between the Institutes”.
As the US and the UK strengthen their partnership on AI safety, they have also committed to develop similar partnerships with other countries to promote AI safety across the globe, according to a press release by the US Department of Commerce.
Since last year, the National Telecommunications and Information Administration (NTIA) in the US has separately been running a consultation on the risks, benefits and potential policy related to dual-use foundation models with widely available weights — the parameters that AI models learn during training, which determine how they make decisions. The development came after President Joe Biden's administration issued an executive order on the safe deployment of AI systems in 2023.
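To make the idea of "weights" concrete, here is a minimal illustrative sketch (not from the article, and vastly simpler than a foundation model): a tiny linear model learns two parameters, `w` and `b`, from data. "Open weights" releases publish numbers like these, just at a scale of billions of parameters.

```python
# Illustrative sketch: "weights" are the numeric parameters a model
# learns from data during training. This toy model learns w and b so
# that its predictions approximate y = 3.0 * x + 0.5.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5  # the underlying relationship the model must learn

w, b = 0.0, 0.0          # weights start uninformative...
lr = 0.1                 # learning rate
for _ in range(500):     # ...and are adjusted step by step during training
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # learned weights approach 3.0 and 0.5
```

Releasing the trained values of `w` and `b` lets anyone run, inspect, or modify the model without access to the training data or process, which is exactly the trade-off between openness and control that the NTIA consultation examines.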
The agency is seeking inputs on the varying levels of openness of AI models; the benefits and risks of making model weights widely available compared to the benefits and risks associated with closed models; innovation, competition, safety, security, trustworthiness, equity, and national security concerns with making AI model weights more or less open; and, the role of the US government in guiding, supporting, or restricting the availability of AI model weights.
Meta, which has open-sourced its Llama model, in its submission to NTIA's consultation called open source the "foundation" of US innovation. "Continued leadership of this technological revolution – including through support for responsible open source AI domestically and in international fora – will underpin US economic, domestic, foreign policy, international development, and national security interests," it added.
OpenAI, the maker of ChatGPT, has taken a middle path in its comments. It said that releasing its flagship AI models via Application Programming Interfaces (APIs) and commercial products like ChatGPT has enabled it to continue studying and mitigating risks discovered after initial release — and that some of these mitigations may not have been possible had the model weights themselves been released.
“These experiences have convinced us that both open weights releases and API and product-based releases are tools for achieving beneficial AI, and we believe the best American AI ecosystem will include both,” it added.
How the world is grappling with AI regulation
Even as private industry innovates rapidly, lawmakers around the world are grappling with setting legislative guardrails around AI to curb some of its downsides. Recently, India's IT Ministry issued an advisory asking generative AI companies deploying "untested" systems in India to seek the government's permission before doing so. However, after the move drew criticism from across the world, the government scrapped the advisory and issued a new one that dropped the requirement of seeking government approval.
Last year, the EU reached a deal with member states on its AI Act which includes safeguards on the use of AI within the EU, including clear guardrails on its adoption by law enforcement agencies. Consumers have been empowered to launch complaints against any perceived violations.
The US White House also issued an Executive Order on AI, which is being offered as a template for other countries looking to regulate AI. In October 2022, Washington had released a Blueprint for an AI Bill of Rights – seen as a building block for the subsequent executive order.