Substrate · Technology

Trump Administration Explores Government Review of AI Models Before Public Release

The Trump administration is discussing measures to vet advanced AI models for safety and security risks prior to their release, marking a potential shift from its previous hands-off stance on AI regulation. Officials are considering an executive order to establish a working group for this purpose, inspired by similar processes in the UK.

Sources: The New York Times, Semafor, Politico, CBS News, Responsible Statecraft, and others (11 total)
May 5, 9:30 PM · 2 min read
Photo: Shealeah Craighead / Wikimedia (public domain)

The Trump administration is considering government oversight of new artificial intelligence models before they are made publicly available, according to multiple reports. The discussion represents a potential departure from the administration's earlier noninterventionist approach to AI.

Administration officials are exploring an executive order that would create a working group comprising technology executives and government representatives to establish a formal review process. One report highlighted that such a review could provide the government with early access to models for military intelligence purposes, particularly in scenarios like AI-enabled cyberattacks.

The change in stance follows the administration's previous criticism of AI regulations, including the rollback of measures from the prior administration that required safety evaluations by developers. President Trump has previously described certain foreign tech regulations as excessive.

Recent developments have heightened worries about AI capabilities. For instance, Anthropic's new Mythos system demonstrated the ability to identify security flaws in nearly all websites, prompting the company to delay its public release.

The White House is reportedly considering vetting frontier AI models before release in response to growing security risks.

Semafor, May 5, 2026.

While the administration has maintained a laissez-faire policy on AI, public sentiment shows more concern than excitement about the technology's potential. The proposed oversight aims to address those concerns by ensuring models meet safety standards before widespread availability.

Other AI firms are advancing rapidly, with capabilities expected to match or exceed current systems soon, and developers may vary in how cautiously they release them, making government intervention a point of discussion. The UK's model involves reviewing frontier AI systems for a range of risks, a framework after which the U.S. is reportedly modeling its approach.

If implemented, this would reverse prior deregulatory actions, such as eliminating requirements for AI safety assessments.

In separate but related tech news, OpenAI's leadership considered spinning out its robotics division but faced board resistance, potentially leading to a holding company structure. Meanwhile, Palantir reported its fastest quarterly revenue growth ever, with significant increases in government and commercial sectors.

Elon Musk settled allegations with the Securities and Exchange Commission regarding disclosure of his Twitter stake, agreeing to a $1.5 million payment without admitting wrongdoing. These events highlight ongoing regulatory scrutiny in the tech industry amid discussions on AI governance.

Key Facts

- Executive order: under discussion to create an AI working group
- UK model: inspiring the US AI review process
- Anthropic's Mythos: release delayed over its ability to detect security flaws
- Policy reversal: a shift from the noninterventionist approach to AI

Story Timeline

4 events

  1. May 6, 12:02 AM ET
     5 new sources added: The BBC, cnbc.com, The Washington Post, New York Post, Al Jazeera
  2. May 5, 2026
     Semafor reported the White House is considering vetting AI models in response to security risks from systems like Anthropic's Mythos. (1 source: Semafor)
  3. May 4, 2026
     President Trump was photographed in Washington, D.C., amid discussions on AI oversight. (1 source: Fortune Magazine)
  4. Recent weeks
     Anthropic delayed release of its Mythos AI system due to cybersecurity concerns. (1 source: Semafor)

Potential Impact

  1. Tech companies will need to submit AI models for government review before release.
  2. National security assessments of AI could become standard in the US.
  3. AI development pace might slow due to added regulatory steps.
  4. International AI safety collaborations could increase with allies like the UK.
  5. Public concerns about AI risks may influence future electoral discussions.

Transparency Panel

- Sources cross-referenced: 11 (8/10 share a lean)
- Framing risk: 38/100 (low)
- Confidence score: 86%
- Synthesized by: Substrate AI
- Word count: 388 words
- Published: May 5, 2026, 9:30 PM
- Bias signals removed: 3 across 2 outlets
- Signal breakdown: Loaded 1 · Speculative 1 · Amplifying 1

Related Stories

Publishing Houses, Scott Turow Sue Meta Over AI Training Data Copyright (thenation.com)
ai · 59 min ago · Framing risk 55/100: rewrite inherits negative framing of Meta's actions through loaded verbs and phrases, with lede misdirection centering on the lawsuit filing over core infringement allegations.

Five major publishing houses and author Scott Turow filed a class action lawsuit against Meta and CEO Mark Zuckerberg, alleging the company illegally used millions of copyrighted books and journal articles to train its Llama AI model. The suit, filed in federal court in Manhattan…

4 sources: fortune.com · The Washington Post · Financial Times · NPR
Elon Musk Settles SEC Lawsuit Over Twitter Stock Disclosures for $1.5 Million (under30ceo.com)
technology · 59 min ago

Elon Musk has settled a civil lawsuit with the U.S. Securities and Exchange Commission accusing him of delaying disclosure of his 2022 Twitter stock purchases. A trust in his name will pay a $1.5 million penalty without admitting wrongdoing. The case stemmed from an 11-day delay…

4 sources: The New York Times · The Washington Post · The Guardian · Ars Technica
Brockman Testifies on Heated 2017 Dispute with Musk Over OpenAI's For-Profit Shift in Federal Trial (france24.com)
ai · 4 hrs ago

OpenAI President Greg Brockman detailed a heated 2017 confrontation with Elon Musk during testimony in the federal trial Musk v. Altman. He described Musk storming around a table and grabbing a painting after rejecting shared control proposals. The lawsuit seeks $150 billion in d…

3 sources: The New York Times · Wired · New York Post