Organizations around the world are unprepared for the threat from bad bots – and UK businesses are some of the worst performers

As AI-driven bot traffic booms, legacy defenses are failing fast


The UK is one of the world's worst performers when it comes to protecting against bots – though most countries fare poorly.

That's according to DataDome, which states that only 1.8% of large UK domains are fully protected against bots, compared with a Europe-wide average of 2.5% and a global average of 2.8%. Bigger organizations are no better than smaller ones, with only 2% of domains with more than 30 million monthly visits fully protected.

And the problem is growing. AI-driven bot traffic is becoming the norm, said DataDome, with AI bot and crawler traffic rising from 2.6% of verified bot traffic in January this year to more than 10.1% by August. The company said it detected nearly 1.7 billion requests from OpenAI crawlers alone in a single month – scraping web content, draining server resources, and exposing proprietary data.


Earlier this year, the National Cyber Security Centre (NCSC) warned of a developing digital divide between systems keeping pace with AI-enabled threats and those that are more vulnerable.

"Our data on low bot protection rates provides concrete evidence that this is already happening, proving that many UK businesses are ill-equipped for the next wave of AI-driven cyberattacks," said Jérôme Segura, VP of threat research at DataDome.

"This gap is not accidental, but is rooted in systemic national issues: an economy dominated by resource-strapped small and medium-sized enterprises, a persistent and critical cybersecurity skills gap, and a business culture that frequently underinvests in security."

Businesses aren't doing a great job of countering the problem, the researchers said. Legacy defenses are failing fast, with only 2.8% of websites fully protected in 2025, down from 8.4% in 2024.

While nearly nine in ten domains disallow GPTBot in their robots.txt files, AI-powered crawlers and browsers ignore these directives, rendering static blocking strategies obsolete. Meanwhile, anti-fingerprinting bots were blocked by only around 7% of websites, and fake Chrome and curl bots were detected just 21% of the time.
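That weakness is easy to see in practice: robots.txt is a voluntary convention, not an enforcement mechanism. The sketch below is a minimal illustration (not DataDome's methodology; the domain is a placeholder and the check requires network access) using Python's standard urllib.robotparser to test whether a site's robots.txt disallows GPTBot – a check that only compliant crawlers bother to make.

```python
# Minimal sketch: does a site's robots.txt deny GPTBot?
# Assumptions: "example.com" is a placeholder domain; this only reads
# the advisory robots.txt file – it cannot actually block anything.
from urllib.robotparser import RobotFileParser

def gptbot_disallowed(domain: str) -> bool:
    """Return True if robots.txt denies GPTBot access to the root path."""
    parser = RobotFileParser()
    parser.set_url(f"https://{domain}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt file
    return not parser.can_fetch("GPTBot", f"https://{domain}/")

if __name__ == "__main__":
    print(gptbot_disallowed("example.com"))
```

A crawler that never fetches or honors the file sidesteps this "protection" entirely, which is why the report treats static blocking as obsolete.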

The weakest protection was seen in the government, non-profit, and telecoms sectors, while travel and hospitality, gambling, and real estate were the best-performing.

Threat actors, said DataDome, are increasingly blending basic automation with more advanced tactics such as leveraging agentic AI tools that simulate human behavior, bypass Captchas, manipulate TLS fingerprints, and adapt in real time.

"AI agents are rewriting the rules of online engagement," said Segura. "Traditional defenses, built to spot static automation, are collapsing under this complexity. Businesses can't tell if the AI traffic they're seeing is good or bad, which leaves them both exposed to fraud and blind to opportunity. What's needed is adaptive, intent-based protection that can make sense of this AI-driven chaos in real time."

Earlier this year, research from Imperva revealed that bot traffic accounts for more than half of all web traffic, with malicious bots making up 37% of all internet traffic.


Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.