Substrate · Technology

Half of UK Children Own AI Toys, Survey Finds

A survey by the British Standards Institution found that 50 per cent of children aged 16 and under in the UK own at least one AI-enabled toy or device. Nearly half of parents believe their children would be better off without AI access, and most expressed concerns about data risks and content exposure.

The Independent · 1 source · May 10, 11:01 PM · 2 min read

Half of children across the UK own an AI-enabled toy or device, according to a survey conducted by the British Standards Institution to mark its 125th anniversary. The poll found that 50 per cent of children aged 16 and under have received at least one such product, including interactive robots or smart tablets.

Almost half of parents, or 47 per cent, said their child would be better off growing up without any access to AI. At the same time, 75 per cent expressed concern that internet-connected AI toys could expose children to unwanted content or data vulnerabilities.

The survey revealed a contrast in risk perceptions. Some 54 per cent of parents said they would be more likely to let their child play unsupervised with an AI-enabled toy than play outside on the street without an adult, an activity that 51 per cent of parents viewed as the riskier of the two.

A smaller share, 46 per cent, said they would allow children to visit local shops or parks alone.

Fewer than half of parents, 46 per cent, believe their child could distinguish between a human and an AI response. Only 43 per cent think their child could accurately assess information provided by an AI chatbot. A total of 78 per cent of parents are worried that these devices might respond to sensitive questions without appropriate oversight, while 70 per cent fear AI systems could praise or criticise behaviour in ways that may not be suitable or safe.

Nine in 10 parents, or 91 per cent, said a recognised safety certification or mark for AI toys would be important, with 29 per cent calling it essential. In addition, 83 per cent believe manufacturers should follow established standards or codes of conduct, and 72 per cent want clearer details on whether products meet safety and security requirements.

Existing toy safety marks such as CE or UKCA focus on physical risks such as choking. Some devices meet information security standards such as ISO 27001. However, no widely recognised framework currently addresses the specific safety, behavioural and developmental issues linked to AI in children's products.

The UK government has published a proposed new product safety framework that seeks to address risks of harm connected to AI, including in children's toys. The survey was carried out by Focaldata and polled 1,000 UK parents in April.

Key Facts

- 50% of UK children own an AI toy or device
- 47% of parents would prefer their children grow up without AI access
- 75% of parents are concerned about data and content risks
- 91% of parents want a recognised AI toy safety mark
- No dedicated AI toy framework exists for behavioural and developmental risks

Story Timeline

1. April 2026: Focaldata polled 1,000 UK parents for the BSI survey. (1 source: The Independent)
2. May 11, 2026: The British Standards Institution released survey findings on children's AI toy ownership. (1 source: The Independent)

Potential Impact

1. The new government product safety framework may include specific rules for children's AI products.
2. Parents may delay purchasing AI toys until dedicated safety standards are introduced.
3. Manufacturers could face pressure to adopt voluntary codes of conduct for AI toys.
4. Demand for clearer labelling on data collection and content filtering in AI toys may increase.

Transparency Panel

Sources cross-referenced: 1
Confidence score: 65%
Synthesized by: Substrate AI
Word count: 398 words
Published: May 10, 2026, 11:01 PM
Bias signals removed: 4 across 2 outlets
Signal breakdown: Loaded 1 · Framing 1 · Amplifying 1 · Editorializing 1
