“Deepfake detection challenge” shows the pace of synthetic media – and why technology sector standards, plus everyday verification by journalists, are critical to public safety and trust
A “deepfake detection challenge” has highlighted how quickly synthetic audio and video are evolving – and why shared technology sector standards, alongside trusted journalism, are essential.
In 2025 alone, an estimated 8 million deepfakes were shared, up from 500,000 in 2023. The AI-generated fakes – video, still images and audio – are used in scams, impersonation and harassment, including harmful sexualised content.
Representatives of the News Media Coalition observed a four-day exercise in February at Microsoft’s London headquarters, where more than 350 participants – including representatives of INTERPOL, the Five Eyes community and major technology companies – were immersed in high-pressure scenarios and asked to identify large volumes of truthful, fake and partially manipulated media. The UK government said the scenarios reflected pressing risks including “victim identification, election security, organised crime, impersonation and fraudulent documentation.”

Jess Phillips MP, Minister for Safeguarding and Violence Against Women and Girls, who attended the UK’s Deepfake Detection Challenge in London.
For Minister for Safeguarding and Violence Against Women and Girls Jess Phillips, the threat is both widespread and personal: “A grandmother deceived by a fake video of her grandchild. A young woman whose image is manipulated without consent. A business defrauded by criminals impersonating executives. This technology does not discriminate. The devastation of being deepfaked without consent or knowledge is unmatched, and I have experienced it firsthand.”
Deepfakes are hard to beat because attackers can mix authentic and synthetic material, tailor content to a single target, and spread it at speed. Even strong detectors can struggle once content is compressed, re-uploaded, or subtly edited again, and false positives can wrongly implicate real people.
That is why the government is developing a deepfake detection evaluation framework: to benchmark tools against real-world threats and map where gaps remain, so law enforcement, platforms and others can act with greater confidence.
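The article does not describe the framework’s internals, but the trade-off it alludes to – catching fakes without wrongly implicating real people – can be illustrated with a minimal, hypothetical sketch of how an evaluation might score a detector on labelled media. The function name, labels and example batch below are all illustrative assumptions, not details of the government framework.

```python
# Hypothetical sketch: scoring a deepfake detector against labelled media.
# Labels: 1 = synthetic, 0 = authentic. Predictions are the detector's calls.

def score_detector(labels, predictions):
    """Return (detection_rate, false_positive_rate) for a batch of calls."""
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    detection_rate = tp / (tp + fn) if tp + fn else 0.0
    false_positive_rate = fp / (fp + tn) if fp + tn else 0.0
    return detection_rate, false_positive_rate

# Example batch: six clips, half synthetic; the detector misses one fake
# and wrongly flags one authentic clip.
labels      = [1, 1, 1, 0, 0, 0]
predictions = [1, 1, 0, 1, 0, 0]
tpr, fpr = score_detector(labels, predictions)
```

A benchmark reporting both numbers makes the tension visible: tuning a detector to catch more fakes typically raises its false-positive rate, which is exactly the failure mode the article warns can harm real people.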
Tech Secretary Liz Kendall warned: “Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear.” She added that “detection is only part of the solution”, pointing to new offences targeting non-consensual intimate deepfakes and action against “nudification” tools.
Andrew Moger, the CEO of the News Media Coalition, said the “rolling tsunamis of misleading and harmful content produced by the corrupt and the criminal” mean all sectors must play their role in ensuring the public are safe and sure about what they are exposed to online. He added: “In addition to long-form investigations, we believe that everyday news reporting and news coverage, what we call Primary Source Journalism, is one of the biggest societal tools for helping the public navigate through to trustworthy information.” In short: technology can help spot the fake, but people still need reliable ways to decide what is real.