Substrate · ai

NHS England Restricts Access to Software Code Over AI Security Concerns

NHS England has issued new guidance requiring staff to make software repositories private by default, citing risks from advanced AI models like Mythos. The move reverses prior open-source policies despite expert views that it is unnecessary. A deadline of May 11 has been set for compliance.

New Scientist
disabilityscoop.com
cbsnews.com
khn.org
4 sources · May 1, 3:39 PM (4 days ago) · 1m read

NHS England is restricting public access to its software code in response to perceived hacking risks from artificial intelligence models. The organization has issued guidance to staff, requiring that all source code repositories be private by default.

The change applies to existing and future software, with public access allowed only where there is an explicit, approved need. The guidance sets a deadline of May 11 for making repositories private, and cites advances in AI, including the Mythos model developed by Anthropic, as increasing the risk that attackers could ingest public code and infer exploitable flaws.

Previously, NHS software was made open-source on platforms like GitHub, as it is created with public funds, allowing other organizations to reuse and improve it.

Security experts say the policy change is unnecessary. Terence Eden, who worked on open data access in the UK Civil Service, said the move makes no logical sense. He noted that open-source software is often more secure because of community scrutiny, and that much NHS software is not security-critical.

Eden added that since the code has been public for years, it remains available in backups and downloads.

The new measures contradict the NHS service standard, which requires software produced with public money to be open-source to avoid duplication and promote better services. For example, had the code for the Post Office's Horizon IT system been public, the prolonged scandal over wrongful accusations might have been averted.

A spokesperson for NHS England stated that the restriction is temporary to strengthen cyber security while assessing AI developments. The organization plans to continue publishing code where there is a clear need.

Key Facts

May 11: deadline for making NHS code repositories private
Mythos AI: cited as the reason for the policy change
AISI finding: Mythos attacks only weak systems
Open-source standard: previously required for publicly funded software
Temporary restriction: to assess AI impacts on security

Story Timeline

3 events
  1. 2026-05-01

    NHS England issues guidance to make software repositories private by default due to AI risks.

    1 source · @NewScientist
  2. 2026-04

    Anthropic's Mythos AI is reported capable of discovering software flaws.

    1 source · @NewScientist
  3. Prior to 2026

    NHS software was made open-source on GitHub as per service standards.

    1 source · @NewScientist

Potential Impact

  1. Other organizations may face delays in building on NHS software, increasing development costs.

  2. Community contributions to NHS code could decrease, slowing improvements.

  3. Reduced transparency could limit public trust in NHS digital services.

  4. The policy may prompt similar restrictions in other UK public sectors assessing AI risks.

Transparency Panel

Sources cross-referenced: 4
Framing risk: 55/100 (moderate)
Confidence score: 98%
Synthesized by: Substrate AI
Word count: 268 words
Published: May 1, 2026, 3:39 PM
Bias signals removed: 3 across 2 outlets
Signal Breakdown
Loaded: 1 · Amplifying: 1 · Editorializing: 1

Related Stories

ai · 13 min ago · Updated

Brockman Testifies on Heated 2017 Dispute with Musk Over OpenAI's For-Profit Shift in Federal Trial

OpenAI President Greg Brockman detailed a heated 2017 confrontation with Elon Musk during testimony in the federal trial Musk v. Altman. He described Musk storming around a table and grabbing a painting after rejecting shared control proposals. The lawsuit seeks $150 billion in d…

The New York Times
Wired
New York Post
BBC News
Business Insider
+4
10 sources
ai · 2 hrs ago · Developing

Italian Prime Minister Meloni Warns of AI-Generated Deepfakes and Shares Altered Image

Italian Prime Minister Giorgia Meloni highlighted risks from AI-generated fake images, noting one depicting her in underwear and urging verification of online content. She filed a libel suit two years ago over similar deepfake images. Meanwhile, U.S. Secretary of State Marco Rubi…

The Independent
1 source
ai · 4 hrs ago

Richard Dawkins Claims AI Chatbot Shows Signs of Consciousness After Three-Day Conversations

Evolutionary biologist Richard Dawkins engaged in three-day discussions with an AI bot named Claudia, leading him to state that AIs are conscious and human-like. He shared unpublished work and philosophical reflections with the bot, which responded with poems and praise.

ZE
The Guardian
OilPrice.com
3 sources