
Anthropic Restricts Claude Mythos AI Access to Select Organizations After Hacking Demonstration

Anthropic has revealed its AI model Claude Mythos excels at hacking and is restricting access to governments, tech giants, and banks. UK cyber officials, including NCSC head Richard Horne, stated that such advanced AI can enhance public cyber-security if secured from misuse. Security Minister Dan Jarvis urged AI firms to collaborate with the government on national cyber-defense.

BBC News
thezvi.wordpress.com
2 sources · Apr 21, 9:43 PM (14 days ago) · 2 min read
Developing · Limited corroboration so far. This page will refresh as more sources emerge.

Anthropic has revealed that its AI model Claude Mythos is extremely good at hacking, claiming it is an expert hacker as good as, if not better than, the best humans. As a result, the company is restricting access to the model to governments, tech giants, and banks.

Anthropic, the maker of the chatbot Claude, has not announced a release date for Claude Mythos. BBC News reported that the threat posed by AI such as Claude Mythos has made headlines around the world as the cyber-security community prepares for its potential impact.

Richard Horne, head of the National Cyber Security Centre (NCSC), stated that advanced AI tools can be a 'net positive' for public cyber-security if they are secured against misuse. Horne will speak at the NCSC's annual CyberUK conference on Wednesday, where he will argue that AI tools can make things safer and more secure.

Horne will urge companies and organisations to get the basics of cyber-security right: updating software on their systems and upgrading legacy IT. He will also urge AI companies to follow newly created European safety guidelines for their models.

The UK's Security Minister, Dan Jarvis, will also speak at CyberUK. He will implore AI firms to work with the government on national cyber-defence, using AI to protect critical networks from attackers.

All of the most powerful and advanced AI models, known as frontier AI, are developed outside the UK; the top-tier companies building them are based in the US or China.

The NCSC warns of ongoing nation-state and hacktivist attacks from Russia and China, and says cyber is now 'the home front' of defence in the UK. It points to recent events, such as the Iran attacks, as evidence of cyber's role in modern conflicts.

BBC News reported that the speeches at CyberUK will press home these ongoing threats, echoing Horne's messages about updating software and upgrading legacy IT.

Key Facts

Anthropic restricts AI model access
Access to Claude Mythos limited to governments, tech giants, and banks due to its hacking expertise.
UK cyber chief's positive view on AI
Richard Horne stated advanced AI can be a 'net positive' for cyber-security if secured.
Government urges AI collaboration
Security Minister Dan Jarvis implored AI firms to work with the government on national cyber-defence.
Frontier AI development location
The most powerful (frontier) AI models are all developed outside the UK, mainly in the US or China.
NCSC warnings on threats
Ongoing nation-state attacks from Russia and China, with cyber as the UK's 'home front'.

Story Timeline

6 events
  1. 2026-04-21

    Anthropic revealed Claude Mythos's hacking capabilities and restricted access.

    1 source: BBC News
  2. 2026-04-20

    Richard Horne prepared speech for CyberUK conference urging AI security basics.

    1 source: BBC News
  3. 2026-04-19

    UK Security Minister Dan Jarvis planned to implore AI firms for government collaboration.

    1 source: BBC News
  4. Recent days

    Media coverage highlighted frontier AI enabling vulnerability exploitation.

    1 source: BBC News
  5. Recent events

    NCSC referenced Iran attacks showing cyber's role in conflicts.

    1 source: BBC News
  6. Ongoing

    NCSC warned of threats from Russia and China.

    1 source: BBC News

Potential Impact

  1. Potential exposure of vulnerabilities if AI tools are misused.

  2. Heightened focus on basic cyber-security practices amid AI advancements.

  3. Enhanced cyber-security for entities with access to restricted AI models.

  4. Increased collaboration between UK government and AI firms on defense capabilities.

  5. Adoption of European safety guidelines by AI companies.

Transparency Panel

Sources cross-referenced: 2
Framing risk: 55/100 (moderate)
Confidence score: 65%
Synthesized by: Substrate AI
Word count: 349 words
Published: Apr 21, 2026, 9:43 PM
Bias signals removed: 3 across 3 outlets
Signal Breakdown
Loaded: 3

Related Stories

ai · 3 min ago · Updated

Brockman Testifies on Heated 2017 Dispute with Musk Over OpenAI's For-Profit Shift in Federal Trial

OpenAI President Greg Brockman detailed a heated 2017 confrontation with Elon Musk during testimony in the federal trial Musk v. Altman. He described Musk storming around a table and grabbing a painting after rejecting shared control proposals. The lawsuit seeks $150 billion in d…

The New York Times
Wired
New York Post
BBC News
Business Insider
+3
9 sources
technology · 3 min ago · Updated

Trump Administration Explores Government Review of AI Models Before Public Release

The Trump administration is discussing measures to vet advanced AI models for safety and security risks prior to their release, marking a potential shift from its previous hands-off stance on AI regulation. Officials are considering an executive order to establish a working group…

FO
The New York Times
Semafor
Politico
CBS News
+6
12 sources
technology · 2 hrs ago

Elon Musk Settles SEC Lawsuit Over Twitter Stock Disclosures for $1.5 Million

Elon Musk has settled a civil lawsuit with the U.S. Securities and Exchange Commission accusing him of delaying disclosure of his 2022 Twitter stock purchases. A trust in his name will pay a $1.5 million penalty without admitting wrongdoing. The settlement avoids repayment of an…

The New York Times
The Washington Post
The Guardian
Ars Technica
4 sources