Law enforcement needs to fight fire with fire on AI threats
A report from the UK's national data science institute calls for a new AI-focused taskforce


UK law enforcement agencies have been urged to adopt a more proactive approach to AI-related cyber crime as threats posed by the technology accelerate.
The Alan Turing Institute, the UK's national institute for data science and artificial intelligence, is calling for the creation of a dedicated AI Crime Taskforce within the National Crime Agency.
Research from the institute shows there's been a substantial acceleration in AI-enabled crime - particularly financial crime, phishing, and romance scams.
"AI-enabled crime is already causing serious personal and social harm and big financial losses," said Joe Burton, professor of international security at Lancaster University and one of the authors of the report.
"We need to get serious about our response and give law enforcement the necessary tools to actively disrupt criminal groups. If we don’t, we’re set to see the rapid expansion of criminal use of AI technologies."
The report warned that Chinese innovation in frontier AI is having a worrying effect, with criminals exploiting new open-weight systems with fewer guardrails to carry out more advanced tasks.
And it's the ability of AI to automate, augment, and rapidly scale the volume of criminal activity that's behind the rise in AI-enabled crime. These tools are being used and shared by state, private sector, and criminal groups, prompting a surge in activity.
"As AI tools continue to advance, criminals and fraudsters will exploit them, challenging law enforcement and making it even more difficult for potential victims to distinguish between what’s real and what’s fake," said Ardi Janjeva, senior research associate at the Alan Turing Institute and a co-author of the report.
"It’s crucial that agencies fighting crime develop effective ways to mitigate this, including combatting AI with AI."
Fighting fire with fire
AI is already being used to identify malicious actors in real time, support the analysis of large volumes of text data, and counter AI-generated deepfakes, phishing scams, and misinformation campaigns, according to the institute.
The authors of the report have called for a sharpened focus on extending this work and on equipping agencies with more advanced tools.
Central to this is the creation of a new AI Crime Taskforce within the National Crime Agency’s National Cyber Crime Unit, to coordinate a national response to AI-enabled crime.
It should collect data from across UK law enforcement to monitor and log criminal groups’ use of AI, work with national security and industry partners on strategies, and move quickly to scale up the adoption of AI tools to proactively disrupt criminal networks.
Cooperation with European and international law enforcement partners needs to be improved. This, the report noted, will ensure there's compatibility in approaches to deterring, disrupting, and pursuing criminal groups leveraging AI.
The report also called for a new working group within Europol focused on AI-enabled crime.
Law enforcement shouldn't just be mapping AI tools in policing, however. The report suggested agencies should also be logging the tools that are being misused for criminal purposes on a new central database within the proposed AI Crime Taskforce.
All this, of course, will take time, expertise, and money, and the report acknowledges that there's a shortage of skills within law enforcement, as well as gaps in regulation.
As a result, enhanced training will be needed to address these shortfalls, the authors said.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.