Google, Microsoft, and xAI Agree to US Government Testing of AI Models for Security Risks
The US Department of Commerce announced agreements with Google, Microsoft and xAI to test new AI models for capabilities and security risks before public release. The pacts expand on prior arrangements with OpenAI and Anthropic, with evaluations focusing on national security, cybersecurity and biosecurity. CAISI has already completed over 40 evaluations of unreleased models.
The US Department of Commerce announced on Tuesday that Google, Microsoft and xAI have agreed to submit their new AI models for testing before public release, expanding voluntary pacts to evaluate capabilities and security risks. The agreements, reached through the Center for AI Standards and Innovation (CAISI), build on prior arrangements with OpenAI and Anthropic from the Biden administration.
AI models from Google, Microsoft, xAI, OpenAI and Anthropic will undergo evaluations for national security, cybersecurity and biosecurity risks, according to announcements from the department and multiple sources.
CAISI has conducted more than 40 evaluations of AI tools, including state-of-the-art models that remain unreleased, the center stated. The work will cover testing, collaborative research and best-practice development for commercial AI systems. The pacts, led by CAISI, give the US government pre-release access to the most advanced frontier AI models from Alphabet, Microsoft and xAI.
Previous agreements with Anthropic and OpenAI have been renegotiated to align with Commerce Secretary Howard Lutnick and President Trump's new directives on security reviews. Tom Lue, vice president of global AI affairs at Google DeepMind, confirmed the partnership in a social media post on Tuesday.
The agreement was announced on 5 May 2026, amid a separate Pentagon deal with seven tech companies—Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX—to use AI in classified systems.
Anthropic, which developed a model called Mythos, is involved in a lawsuit with the US Department of Defense over its refusal to drop safety guardrails for government use of its models. Senior members of Trump's staff met last month with Anthropic CEO Dario Amodei. US President Donald Trump signed executive orders last year forming the basis of his administration's AI Action Plan.
The Trump administration pledged in July to partner with technology companies to vet AI models for national security risks. Agreements with OpenAI and Anthropic were made in 2024 under former President Joe Biden’s administration, when CAISI was known as the US Artificial Intelligence Safety Institute.
The AI Safety Institute was established in 2023 under the Biden administration and renamed CAISI under the Trump administration.
A Pew Research Center poll last year found that 50% of Republicans and 51% of Democrats were more concerned than excited about the increased use of AI in daily life.
Microsoft holds a Seeking Alpha Quant Rating of Strong Buy, and Google holds a Seeking Alpha Quant Rating of Hold.
Story Timeline
- 2026-05-05: The Department of Commerce announced agreements with Google, Microsoft and xAI for AI model testing. (Sources: New York Post, Al Jazeera)
- 2026-04: Senior members of Trump's staff met with Anthropic CEO Dario Amodei. (Source: The BBC)
- 2025-07: The Trump administration pledged to partner with technology companies to vet AI models for national security risks. (Source: Al Jazeera)
- 2025: US President Donald Trump signed executive orders forming the basis of his administration's AI Action Plan. (Source: The BBC)
- 2024: Agreements with OpenAI and Anthropic were made under former President Joe Biden's administration. (Source: Al Jazeera)
- 2023: The AI Safety Institute was established under the Biden administration. (Source: New York Post)
Potential Impact
1. Enhanced US government oversight of AI development for national security.
2. Increased collaboration between tech firms and government on AI best practices.
3. Stock price fluctuations for involved companies, as seen with Microsoft and Alphabet.
4. Potential delays in AI model releases due to pre-deployment evaluations.
5. Influence on public perception of AI risks, amid ongoing concerns from polls.