Law enforcement needs to fight fire with fire on AI threats
A report from the UK's national data science institute calls for a new AI-focused taskforce


UK law enforcement agencies have been urged to employ a more proactive approach to AI-related cyber crime as threats posed by the technology accelerate.
The Alan Turing Institute, the UK's national institute for data science and artificial intelligence, is calling for the creation of a dedicated AI Crime Taskforce within the National Crime Agency.
Research from the institute shows there's been a substantial acceleration in AI-enabled crime - particularly financial crime, phishing, and romance scams.
"AI-enabled crime is already causing serious personal and social harm and big financial losses," said Joe Burton, professor of international security at Lancaster University and one of the authors of the report.
"We need to get serious about our response and give law enforcement the necessary tools to actively disrupt criminal groups. If we don’t, we’re set to see the rapid expansion of criminal use of AI technologies."
The report warned that Chinese innovation in frontier AI is having a worrying effect, with criminals exploiting new open weight systems with fewer guardrails to carry out more advanced tasks.
And it's the ability of AI to automate, augment, and rapidly scale the volume of criminal activity that's behind the rise in AI-enabled crime. These tools are being used and shared by state, private sector, and criminal groups, which is prompting a surge in activity.
"As AI tools continue to advance, criminals and fraudsters will exploit them, challenging law enforcement and making it even more difficult for potential victims to distinguish between what’s real and what’s fake," said Ardi Janjeva, senior research associate at the Alan Turing Institute and a co-author of the report.
"It’s crucial that agencies fighting crime develop effective ways to mitigate this, including combatting AI with AI."
Fighting fire with fire
AI is already being used to identify malicious actors in real time, support the analysis of large volumes of text data, and counter AI-generated deepfakes, phishing scams, and misinformation campaigns, according to the institute.
The authors of the report called for this work to be extended and for agencies to be equipped with more advanced tools.
Central to this is the creation of a new AI Crime Taskforce within the National Crime Agency’s National Cyber Crime Unit, to coordinate a national response to AI-enabled crime.
It should collect data from across UK law enforcement to monitor and log criminal groups' use of AI, work with national security and industry partners on strategies, and move quickly to scale up the adoption of AI tools that proactively disrupt criminal networks.
Cooperation with European and international law enforcement partners needs to be improved. This, the report noted, will ensure there's compatibility in approaches to deterring, disrupting, and pursuing criminal groups leveraging AI.
The report also calls for a new working group in Europol focused on AI-enabled crime.
Law enforcement shouldn't just be mapping AI tools in policing, however. The report suggested agencies should also be logging the tools that are being misused for criminal purposes on a new central database within the proposed AI Crime Taskforce.
All this, of course, will take time, expertise, and money, and the report acknowledges that there's a shortage of skills within law enforcement, as well as gaps in regulation.
As a result, enhanced training will be needed to fix that, the authors said.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.