
Metropolitan Police AI Pilot Uncovers 598 Potential Misconduct Cases Among Officers

The Metropolitan Police used an AI program from Palantir to uncover misconduct, leading to investigations of 100 officers and warnings for 16. The week-long pilot analyzed internal data on sickness, overtime and complaints, revealing abuses including fraud and sexual harassment. Sir Mark Rowley highlighted the findings as part of ongoing integrity efforts.

The Times
1 source · Apr 24, 10:26 PM · 1m read
Developing: limited corroboration so far.

The Metropolitan Police deployed an artificial intelligence program to uncover misconduct and criminality across the force, identifying officers involved in abuse of authority for sexual purposes, fraud and sexual assault. The AI program, supplied by the US technology company Palantir, interrogated internal systems tracking sickness, overtime, expenses, building access and public complaints.

The analysis revealed misuse of internal systems by senior officers, including false overtime claims, manipulation of shift data to secure additional leave and misrepresentation of home working.

The pilot, conducted without the knowledge of officers or staff, lasted one week and uncovered evidence of sexual harassment and exploitation of HR systems for financial gain. The program analyzed years of internal data to identify patterns of misconduct. As a result, 100 officers are under investigation for gross misconduct, and 16 officers have received warning notices.

In total, 598 cases relate to abuse of the IT shift system for personal or financial benefit. About 42 senior officers, from chief inspector to chief superintendent, face losing their jobs after falsely claiming to be in the office while working from home in breach of Metropolitan Police guidelines.

Twelve officers face gross misconduct proceedings for failing to declare membership of the Freemasons.

Three officers have been suspended, and two officers have been arrested. A further 30 cases have been flagged for suspicious behaviour, which the Metropolitan Police states are currently uncorroborated. Sir Mark Rowley set up the initiative after the Charing Cross scandal in which BBC Panorama filmed officers making sexist and racist comments.

Sir Mark Rowley, who took office in 2022, has overseen the dismissal of 1,500 officers. He said: “We’ve made all this effort on integrity … but we’ve still got further to dig down for the people who are not determined to change … Those numbers are extraordinary.”

Key Facts

AI pilot uncovers misconduct
The week-long pilot identified 598 cases of IT shift-system abuse, leaving 100 officers under investigation and 42 senior officers facing job loss.
Specific wrongdoing revealed
Analysis showed false overtime claims, sexual harassment, fraud, and failure to declare Freemasons membership among officers.
Actions taken
Three officers suspended, two arrested, 16 received warnings, and 30 uncorroborated cases flagged.
Leadership context
Sir Mark Rowley initiated the program after 2022 Charing Cross scandal and has dismissed 1,500 officers since taking office.

Story Timeline

4 events
  1. 2026-04-24

    The Metropolitan Police deployed an AI program for a one-week pilot to uncover misconduct, leading to investigations and arrests.

    1 source: The Times
  2. 2022

    Sir Mark Rowley took office as Metropolitan Police commissioner and set up the AI initiative following the Charing Cross scandal.

    1 source: The Times
  3. Post-2022

    Sir Mark Rowley has overseen the dismissal of 1,500 officers since taking office.

    1 source: The Times
  4. Pre-2022

    The Charing Cross scandal occurred, involving BBC Panorama filming officers making sexist and racist comments.

    1 source: The Times

Potential Impact

  1. Potential dismissal of up to 42 senior officers and others, affecting Metropolitan Police staffing and operations.

  2. Improved integrity within the force, building on the dismissal of 1,500 officers since 2022.

  3. Expansion of AI use in policing for crime data analysis, as considered by the Metropolitan Police.

  4. Criticism of AI tools from the Police Federation, potentially leading to policy reviews or legal challenges.

Transparency Panel

Sources cross-referenced: 1
Framing risk: 18/100 (low)
Confidence score: 65%
Synthesized by: Substrate AI
Word count: 303 words
Published: Apr 24, 2026, 10:26 PM
Bias signals removed: 3 across 3 outlets
Signal Breakdown
Loaded: 2 · Sensationalism: 1

Related Stories

Major Publishers and Author Sue Meta Over Alleged Use of Copyrighted Works in Llama AI Training
technology · 2 hrs ago · Updated · 6 sources (The Independent, fortune.com, The Washington Post, The Guardian, The Verge, +1)

Five major publishing houses and author Scott Turow filed a lawsuit against Meta in Manhattan federal court, accusing the company of pirating millions of copyrighted works to train its Llama AI models. The suit claims Meta CEO Mark Zuckerberg personally authorized the infringemen…

Brockman Testifies on Heated 2017 Dispute with Musk Over OpenAI's For-Profit Shift in Federal Trial
ai · 2 hrs ago · Updated · 10 sources (The New York Times, Wired, New York Post, BBC News, Business Insider, +4)

OpenAI President Greg Brockman detailed a heated 2017 confrontation with Elon Musk during testimony in the federal trial Musk v. Altman. He described Musk storming around a table and grabbing a painting after rejecting shared control proposals. The lawsuit seeks $150 billion in d…

Trump Administration Explores Government Review of AI Models Before Public Release
technology · 4 hrs ago · Updated · 12 sources (FO, The New York Times, Semafor, Politico, CBS News, +6)

The Trump administration is discussing measures to vet advanced AI models for safety and security risks prior to their release, marking a potential shift from its previous hands-off stance on AI regulation. Officials are considering an executive order to establish a working group…