US Commerce Department Signs AI Model Review Agreements with Google, Microsoft, and xAI
The US Department of Commerce announced agreements with Microsoft, Google DeepMind, and xAI to review early versions of new AI models for national security risks before public release. The pacts build on similar deals from the Biden era and aim to evaluate capabilities in areas like cybersecurity and biosecurity. CAISI has already conducted over 40 such evaluations.
(Image: G. Edward Johnson / Wikimedia, CC BY 4.0)
The US government has reached agreements with Google DeepMind, Microsoft, and xAI to review early versions of their new AI models before public release, focusing on national security risks. The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the deals on Tuesday.
CAISI stated that these collaborations aim to scale federal efforts on AI risks, amid ongoing debates about balancing innovation and security.
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said Chris Fall, CAISI director. CAISI facilitates collaboration between the tech industry and the federal government in developing standards and assessing risks for commercial AI systems.
The agreements emphasize identifying national security risks tied to cybersecurity, biosecurity, and chemical weapons.
Developers share unreleased AI models with the government with safety guardrails reduced or removed, enabling thorough evaluation of national security-related capabilities and risks, CAISI stated. OpenAI and Anthropic signed similar deals with the Biden administration two years ago.
Under former President Joe Biden, the institute—then known as the US Artificial Intelligence Safety Institute—focused on developing AI tests, definitions, and voluntary safety standards.
It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic. CAISI has completed more than 40 evaluations, including of cutting-edge models not yet available to the public.
The agreement allows evaluation of models before deployment and research to assess capabilities and security risks. The deals build on similar agreements from the Biden administration and fulfill a 2025 Trump pledge to collaborate on AI vetting. US President Donald Trump signed executive orders last year that formed the basis of his administration's AI Action Plan, which seeks to streamline AI regulations while prioritizing US leadership in the field.
Microsoft will work with US government scientists to test AI systems in ways that probe unexpected behaviors. Microsoft and the US government will develop shared datasets and workflows for testing Microsoft's models. Microsoft regularly undertakes many types of AI testing on its own, but testing for national security and large-scale public safety risks must be a collaborative endeavor with governments, according to a Microsoft blog post.
Microsoft announced a similar agreement in the UK on Tuesday with the government-backed AI Security Institute, which focuses on safe AI development. Evaluations of AI models from the companies will cover testing, collaborative research, and best practice development related to commercial AI systems.
"These expanded industry collaborations help scale work in the public interest at a critical moment," Fall added.
Google and xAI did not immediately respond to requests for comment.
Anthropic limited its rollout of Mythos to a few companies and initiated Project Glasswing, which brings tech companies together to secure the world's most critical software. The company is embroiled in a lawsuit with the US Department of Defense over its refusal to drop safety guardrails for government use of its AI tools. The Pentagon reached agreements last week with seven AI companies to deploy their advanced capabilities on the Defense Department's classified networks; Anthropic was not among them.
Senior members of Trump's staff met last month with Anthropic CEO Dario Amodei. The New York Times and Wall Street Journal reported on Monday that the Trump administration was mulling a potential executive order to create government oversight for AI tools. The Trump administration characterized the reporting on the executive order as speculation.
The White House is considering a new AI working group that would explore potential oversight and vet models before release, according to CNBC. Leading AI companies will give the Commerce Department early access to new systems, The Washington Post reported. Microsoft, Google, and xAI agreed to share AI models with the White House for security reviews, the New York Post reported.
CAISI will conduct pre-deployment evaluations and targeted research to understand capabilities and risks of new tools. The deal comes days after the Pentagon announced an agreement with seven tech giants to use AI in classified systems, according to Al Jazeera. Google, Microsoft, and xAI agreed to voluntarily submit their models for testing through CAISI, The BBC reported.
Story Timeline
6 events
- 2026-05-05: CAISI announced agreements with Google DeepMind, Microsoft, and xAI for AI model reviews. (7 sources: The Guardian · USA Today · The BBC · CNBC)
- Last week: Pentagon reached agreements with seven AI companies to deploy capabilities on classified networks, excluding Anthropic. (2 sources: USA Today · Al Jazeera)
- Last month: Senior members of Trump's staff met with Anthropic CEO Dario Amodei. (1 source: The BBC)
- July 2025: Trump administration pledged to partner with tech companies to vet AI models for national security risks. (1 source: USA Today)
- Last year: President Trump signed executive orders forming the basis of the AI Action Plan. (1 source: The BBC)
- 2024: OpenAI and Anthropic established similar agreements under the Biden administration. (3 sources: The Guardian · USA Today · The BBC)
Potential Impact
1. Enhanced US national security through early identification of AI risks in cybersecurity and biosecurity.
2. Strengthened US position in global AI development by ensuring advancements align with national interests.
3. Increased collaboration between tech firms and government, potentially leading to standardized AI safety practices.
4. Possible influence on international AI policies, as seen with Microsoft's similar UK agreement.
5. Potential delays in AI model releases if significant risks are identified during evaluations.
Related Stories
Trump Administration Explores Government Review of AI Models Before Public Release (thenation.com)
The Trump administration is discussing measures to vet advanced AI models for safety and security risks prior to their release, marking a potential shift from its previous hands-off stance on AI regulation. Officials are considering an executive order to establish a working group…

Publishing Houses, Scott Turow Sue Meta Over AI Training Data Copyright (under30ceo.com)
Five major publishing houses and author Scott Turow filed a class action lawsuit against Meta and CEO Mark Zuckerberg, alleging the company illegally used millions of copyrighted books and journal articles to train its Llama AI model. The suit, filed in federal court in Manhattan…

Elon Musk Settles SEC Lawsuit Over Twitter Stock Disclosures for $1.5 Million
Elon Musk has settled a civil lawsuit with the U.S. Securities and Exchange Commission accusing him of delaying disclosure of his 2022 Twitter stock purchases. A trust in his name will pay a $1.5 million penalty without admitting wrongdoing. The case stemmed from an 11-day delay…