Report: Explicit Content Recommended to Simulated 13-Year-Old X Accounts
A report by the Centre for Countering Digital Hate tested X's content policies using simulated UK-based 13-year-old accounts. The tests showed explicit material in search results and recommendations. The findings relate to compliance with the UK's Online Safety Act.
A report published by the Centre for Countering Digital Hate examined X's handling of adult content for young users. Researchers created two accounts simulating UK-based 13-year-olds, one boy and one girl, to assess search results, algorithms, profiles, and Communities.
The study aimed to evaluate X's adult content policy under the Online Safety Act, which requires platforms to protect children from harmful material.
The report stated that 80 percent of the tested searches returned explicit results within the first ten media posts. Examples included videos and images of oral sex and a woman masturbating on a sofa, with no age verification, content warnings, or filters applied.
The accounts joined 15 out of 20 tested adult sexual communities without restrictions. These communities, such as 'Virgin Trades,' 'Onlyfans virgin club +18,' 'Kink Kings & Queens,' and 'Goon Group,' contained posts with sexual and transactional content, including offers of nude images.
Testing Direct Interactions
To assess direct contact risks, researchers liked posts in communities where users offered messages in exchange for engagement.
The teen accounts received direct message requests from adult accounts, bypassing default settings that limit messages to accounts a user follows. One such message included an unsolicited video of a man masturbating. The Online Safety Act, whose child-safety duties have been enforced since 2025, mandates age limits and safeguards against harmful content for child users.
Non-compliant platforms face fines of 10 percent of global annual revenue or £18 million, whichever is higher. Adult sites like PornHub have implemented age verification for UK visitors as a result. Ofcom, the UK's communications regulator, has investigated over 100 platforms, including X, for compliance.
Earlier this year, Ofcom probed X's Grok AI for generating sexual deepfakes, including of children. An NBC News report on April 14, 2026, indicated that Grok continued to produce such content despite X's stated measures.
Regulatory and Expert Responses
The Children's Commissioner for England stated in August 2025 that X is the most common source of pornography for children, surpassing dedicated sites.
Callum Hood, head of research at the Centre for Countering Digital Hate UK, said the findings demonstrate that X's For You feed recommends explicit content to young users after searches. He noted that changes to account settings allow adults to message minors directly, potentially exposing them to further material.
“These findings show X will quickly reshape its ‘For You’ feed to recommend explicit content to young users. Worse, with a single change to account settings, adults can directly message them, leaving children exposed to more explicit sexual material and the risk of grooming.” — Callum Hood, Centre for Countering Digital Hate UK

Hood added that the platform's setup creates a pathway from curiosity to repeated exposure and adult contact, increasing grooming risks. Nearly a year after enforcement began, the report concludes that X has not fully complied with the Act.

An Ofcom spokesperson stated that protecting children is a priority and that platforms must use age checks to prevent access to pornography. The regulator has issued over a dozen fines for non-compliance and plans enforcement actions against violators.

A Department for Science, Innovation and Technology spokesperson described the findings as disturbing and emphasized platforms' responsibilities under the Act. Ofcom has issued over £3 million in fines and has the department's full backing for further action. The department has launched a consultation on measures including age limits for AI chatbots and games and a potential social media ban for children.

The Independent sought comment from X but received no response by the report's publication.
Story Timeline
Five events:
- 2026-04-15 — Centre for Countering Digital Hate publishes report on X's exposure of simulated teen accounts to explicit content. (Source: The Independent)
- 2026-04-15 — NBC News reports the Grok AI chatbot is still generating sexual images and videos. (Source: NBC News)
- 2026 (earlier this year) — Ofcom opens an investigation into Grok AI over sexual deepfakes. (Source: unattributed)
- 2025-08 — Children’s Commissioner for England warns X is the most common source of children's exposure to pornography. (Source: Children’s Commissioner for England)
- 2025-04-15 (approx.) — Online Safety Act enforcement begins. (Source: unattributed)
Potential Impact
1. Increased regulatory scrutiny of X's algorithm and community features.
2. Heightened awareness of X as the primary source of pornography reaching children, per the Children's Commissioner.
3. Potential fines for X under the Online Safety Act for failing age protections.
4. Broader adoption of age verification by UK platforms, following PornHub's example.
5. UK consultation that may lead to social media bans or AI safeguards for children.
Transparency Panel
Multi-source corroboration verifies facts, not framing. This panel scores the Substrate rewrite above and the raw source bundle it came from. A positive delta means the rewrite stripped framing from the sources; a negative or zero delta means the neutralizer let some through.

Alternative framing: X's algorithm reflects user-driven searches, and while safeguards have gaps, ongoing Ofcom investigations and platform updates show proactive efforts to enhance child protections.

- Valence skew (notable): “videos and images of oral sex and a woman masturbating on a sofa” — graphic details amplify the negative portrayal of the platform's exposure risks.
- Selective sourcing (notable): quotes come from CCDH, the Children's Commissioner, Ofcom, and DSIT, with no X response; every quoted source is critical, and no opposing view is given space despite outreach.
- Loaded metaphor (minor): “creates a pathway from curiosity to repeated exposure and adult contact” frames the platform as enabling a grooming progression.
- Omitted counterpoint (minor): no mention of X's potential defenses or compliance efforts, leaving possible platform mitigations out of view.