Thursday, Apr 04, 2024

The US and the UK sign agreement on AI safety testing: What is the deal?

The move comes as the world is figuring out a way to set guardrails around the fast proliferation of AI systems.

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration. (Photo: Reuters)

Following through on commitments made at the Bletchley Park AI Safety Summit last year, the United States and the United Kingdom on Monday (April 1) signed an agreement that will see them work together to develop tests for the most advanced artificial intelligence (AI) models.

Both countries will share vital information about the capabilities and risks associated with AI models and systems, according to the agreement, which took effect immediately. They will also share fundamental technical research on AI safety and security, and work on aligning their approaches to safely deploying AI systems.

The move comes as the world is figuring out how to set guardrails around the fast proliferation of AI systems. Although these systems offer opportunities, they also pose significant risks to society, from spreading misinformation to undermining election integrity.

The agreement


As part of the partnership, both countries will align their scientific approaches and work closely to accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.

The US and the UK AI Safety Institutes have also laid out plans to build a common approach to AI safety testing and to share their capabilities to ensure these risks can be tackled effectively.


Speaking to The Indian Express, an AI expert said: “They intend to perform at least one joint testing exercise on a publicly accessible model. They also intend to tap into a collective pool of expertise by exploring personnel exchanges between the Institutes”.

As the US and the UK strengthen their partnership on AI safety, they have also committed to develop similar partnerships with other countries to promote AI safety across the globe, according to a press release by the US Department of Commerce.


US seeks inputs on open-source AI models

Since last year, the National Telecommunications and Information Administration (NTIA) in the US has separately been consulting on the risks, benefits and potential policy responses related to dual-use foundation models with widely available weights — the parameters an AI model learns during training, which determine how it makes decisions. The development came after President Joe Biden's administration issued an executive order on the safe deployment of AI systems in 2023.

The agency is seeking inputs on the varying levels of openness of AI models; the benefits and risks of making model weights widely available compared to the benefits and risks associated with closed models; innovation, competition, safety, security, trustworthiness, equity, and national security concerns with making AI model weights more or less open; and, the role of the US government in guiding, supporting, or restricting the availability of AI model weights.

Meta, which has open-sourced its Llama model, in its submission to NTIA’s consultation called open source the “foundation” of US innovation. “Continued leadership of this technological revolution – including through support for responsible open source AI domestically and in international fora – will underpin US economic, domestic, foreign policy, international development, and national security interests,” it added.

OpenAI, the maker of ChatGPT, has taken a middle path in its comments. It said that releasing its flagship AI models via Application Programming Interfaces (APIs) and commercial products like ChatGPT has enabled it to continue studying and mitigating risks discovered after initial release, mitigations that may not have been possible had the model weights themselves been released.


“These experiences have convinced us that both open weights releases and API and product-based releases are tools for achieving beneficial AI, and we believe the best American AI ecosystem will include both,” it added.

How the world is grappling with AI regulation

Even as private industry innovates rapidly, lawmakers around the world are grappling with setting legislative guardrails around AI to curb some of its downsides. Recently, India’s IT Ministry issued an advisory asking generative AI companies deploying “untested” systems in India to seek the government’s permission before doing so. However, after the move drew criticism from around the world, the government scrapped the advisory and issued a new one that dropped the requirement of seeking government approval.

Last year, the EU reached a deal with member states on its AI Act which includes safeguards on the use of AI within the EU, including clear guardrails on its adoption by law enforcement agencies. Consumers have been empowered to launch complaints against any perceived violations.

The US White House also issued an Executive Order on AI, which is being presented as a template that other countries looking to regulate AI could draw on. In October 2022, Washington had released a Blueprint for an AI Bill of Rights, seen as a building block for the subsequent executive order.

Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers’ rights, privacy, India’s prevalent digital divide and a range of other policy interventions that impact big tech companies. He once also tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd about watches, Formula 1 and football.

First uploaded on: 03-04-2024 at 14:26 IST