Half of UK Children Own AI Toys, Survey Finds
A survey by the British Standards Institution found that 50 per cent of children aged 16 and under in the UK own at least one AI-enabled toy or device. Nearly half of parents believe their children would be better off without AI access, and most expressed concerns about data risks and content exposure.
Half of children across the UK own an AI-enabled toy or device, according to a survey conducted by the British Standards Institution (BSI) to mark its 125th anniversary, The Independent reports. The poll found that 50 per cent of children aged 16 and under have received at least one such product, including interactive robots and smart tablets.
Almost half of parents, or 47 per cent, said their child would be better off growing up without any access to AI. At the same time, 75 per cent expressed concern that internet-connected AI toys could expose children to unwanted content or data vulnerabilities.
The survey revealed a contrast in risk perceptions: 54 per cent of parents said they would be more likely to let their child play with an AI-enabled toy unsupervised than play outside on the street without an adult, an activity that 51 per cent of parents viewed as riskier.
A smaller share, 46 per cent, said they would allow children to visit local shops or parks alone.
Fewer than half of parents, 46 per cent, believe their child could distinguish between a human and an AI response. Only 43 per cent think their child could accurately assess information provided by an AI chatbot. A total of 78 per cent of parents are worried that these devices might respond to sensitive questions without appropriate oversight, while 70 per cent fear AI systems could praise or criticise behaviour in ways that may not be suitable or safe.
Nine in 10 parents, or 91 per cent, said a recognised safety certification or mark for AI toys would be important, with 29 per cent calling it essential. In addition, 83 per cent believe manufacturers should follow established standards or codes of conduct, and 72 per cent want clearer details on whether products meet safety and security requirements.
Existing toy safety marks such as CE or UKCA focus on physical risks such as choking, and some devices meet information security standards such as ISO/IEC 27001. However, no widely recognised framework currently addresses the specific safety, behavioural and developmental issues linked to AI in children's products.
The UK government has published a proposed new product safety framework that seeks to address risks of harm connected to AI, including in children's toys. The survey was carried out by Focaldata and polled 1,000 UK parents in April.
Story Timeline
- April 2026: Focaldata polled 1,000 UK parents for the BSI survey. (Source: The Independent, 11 May 2026)
- The British Standards Institution released survey findings on children's AI toy ownership. (Source: The Independent)
Potential Impact
1. The new government product safety framework may include specific rules for children's AI products.
2. Parents may delay purchasing AI toys until dedicated safety standards are introduced.
3. Manufacturers could face pressure to adopt voluntary codes of conduct for AI toys.
4. Demand for clearer labelling on data collection and content filtering in AI toys may increase.