How AI agents are being deployed in the real world
These intelligent systems, capable of independent decision-making and learning, are transforming how organisations detect, respond to, and manage security incidents


The integration of AI agents into real-world applications is rapidly advancing, particularly in the security field, where the sheer volume of threats, with eight billion records compromised in 2023 alone, necessitates more autonomous solutions.
These agents build on agentic AI principles, under which the scope of tasks AI can handle independently is reportedly doubling every seven months, and the benefits are transformative. They enhance the ability to detect, respond to, and mitigate potential threats with unprecedented efficiency. Unlike traditional automated systems, AI agents possess the autonomy to learn, adapt, and act based on their observations, making them invaluable in the modern security landscape.
Understanding agentic AI and security-focused AI agents
At its core, agentic AI refers to AI systems that can operate with a degree of autonomy. These systems are designed to perceive their environment, make decisions, and take actions to achieve specific goals without constant human intervention.
They can learn from interactions, adapt to new information, and proactively pursue objectives. This is a significant step beyond traditional AI models that might excel at pattern recognition or prediction but require human direction to act on those insights. As EC-Council University notes, an Intelligent Agent, whether hardware or software, "is designed to optimize the probability of accomplishing a defined objective through its capacity to observe, learn, and make informed decisions."
In the security field, an AI agent is software that applies agentic AI principles to execute protective tasks. It combines machine learning, natural language processing, and reasoning to interpret context, detect security events, and perform containment or remediation actions. These agents can find vulnerabilities in complex code structures, identify irregularities in user login patterns, and recognise new types of malware that skirt traditional detection methods.
Conversely, not all automated security tools are AI agents. Systems that block known malicious IPs based on a static list or deploy updates on a schedule without dynamic decision-making aren't AI agents. The key difference is an AI agent's capacity for autonomous decision-making, learning, and goal-oriented actions.
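To make that distinction concrete, the sketch below contrasts a fixed blocklist check with a toy "agent" that scores what it observes, acts in pursuit of a goal, and adjusts its own threshold from analyst feedback. It is an illustrative simplification, not any vendor's implementation, and every name in it is hypothetical.

```python
# Illustrative contrast between static automation and a minimal agent loop.
# All names, weights, and thresholds are hypothetical.

STATIC_BLOCKLIST = {"203.0.113.7", "198.51.100.23"}

def static_filter(ip: str) -> bool:
    """Traditional automation: a fixed rule, no learning or goal-seeking."""
    return ip in STATIC_BLOCKLIST


class MinimalSecurityAgent:
    """Toy agent: scores observations, acts towards a goal, adapts from feedback."""

    def __init__(self, block_threshold: float = 0.8):
        self.block_threshold = block_threshold

    def decide(self, failed_logins: int, new_geo: bool) -> str:
        # Perceive: turn raw signals into a simple risk score.
        risk = min(1.0, failed_logins / 10) + (0.3 if new_geo else 0.0)
        # Act: choose an action that serves the goal of containing account abuse.
        if risk >= self.block_threshold:
            return "lock_account"
        if risk >= self.block_threshold / 2:
            return "require_mfa"
        return "allow"

    def learn(self, was_false_positive: bool) -> None:
        # Adapt: relax or tighten the threshold based on analyst feedback.
        self.block_threshold += 0.05 if was_false_positive else -0.02
        self.block_threshold = max(0.3, min(0.95, self.block_threshold))


if __name__ == "__main__":
    agent = MinimalSecurityAgent()
    print(static_filter("203.0.113.7"))                  # True: fixed rule, nothing learned
    print(agent.decide(failed_logins=9, new_geo=True))   # "lock_account"
    agent.learn(was_false_positive=True)                 # analyst marks it benign
    print(agent.decide(failed_logins=5, new_geo=False))  # milder signal -> "require_mfa"
```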
AI agents in action: Bolstering organisational defences
The practical application of AI agents in security is moving from theoretical discussions to tangible deployments, offering significant benefits across several facets of an organisation's defensive strategy.
One primary area is proactive threat detection and response. AI agents continuously monitor data from networks, endpoints, and cloud services, identifying anomalies faster than human teams. For example, they can detect unusual data access patterns indicative of insider risks or unauthorised access attempts, isolate affected endpoints, and alert security analysts with summaries of the events and actions taken.
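As a rough illustration of that loop, the Python sketch below baselines each user's data-access volume, flags a large deviation, "isolates" the affected endpoint, and emits an analyst summary. The event structure, threshold, and response stubs are assumptions made for the example rather than features of any particular product.

```python
# Minimal sketch of anomaly detection plus automated response; all names are hypothetical.
import statistics
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    host: str
    records_accessed: int

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag a reading more than z_threshold standard deviations above the baseline."""
    if len(history) < 5:
        return False  # not enough history to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return (current - mean) / stdev > z_threshold

def isolate_endpoint(host: str) -> None:
    print(f"[action] network-isolated endpoint {host}")

def alert_analyst(summary: str) -> None:
    print(f"[alert] {summary}")

def handle(event: AccessEvent, history: dict[str, list[int]]) -> None:
    past = history.setdefault(event.user, [])
    if is_anomalous(past, event.records_accessed):
        isolate_endpoint(event.host)
        alert_analyst(
            f"{event.user} accessed {event.records_accessed} records on {event.host}; "
            f"baseline mean is {statistics.mean(past):.0f}. Endpoint isolated."
        )
    past.append(event.records_accessed)

if __name__ == "__main__":
    history = {"alice": [40, 55, 38, 61, 47, 52]}
    handle(AccessEvent("alice", "LAPTOP-042", 5200), history)
```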
Additionally, AI agents contribute to intelligent vulnerability management. They correlate vulnerability information with asset criticality and threat intelligence, enabling more effective prioritisation of remediation efforts. Advanced agents might even test potential fixes in sandboxed environments before recommending or applying them.
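A toy example of that kind of correlation might look like the following, where an invented priority score weights CVSS severity by asset criticality and active exploitation. The field names and weights are illustrative assumptions, not drawn from any specific tool.

```python
# Hypothetical vulnerability prioritisation: CVSS weighted by asset criticality
# and whether threat intelligence reports active exploitation.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # 0.0 - 10.0
    asset_criticality: int  # 1 (lab box) - 5 (crown-jewel system)
    exploited_in_wild: bool

def priority(f: Finding) -> float:
    score = f.cvss * (f.asset_criticality / 5)
    if f.exploited_in_wild:
        score *= 1.5  # bump anything with known active exploitation
    return round(score, 1)

findings = [
    Finding("CVE-2024-0001", cvss=9.8, asset_criticality=2, exploited_in_wild=False),
    Finding("CVE-2024-0002", cvss=7.5, asset_criticality=5, exploited_in_wild=True),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, priority(f))
# The lower-CVSS flaw on a critical, actively exploited asset outranks the 9.8.
```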
AI agents also enhance incident investigation and analysis. They quickly gather relevant data from multiple sources, correlate alerts, reconstruct timelines of hostile actions, and provide contextual information. This reduces the manual effort involved in investigations, allowing human experts to focus on strategic decision-making and complex problem-solving.
Finally, AI agents are increasingly capable of automated remediation and containment. In predefined scenarios, they can autonomously contain security events by isolating compromised systems, blocking malicious processes, or revoking compromised credentials.
This capability is bolstered by growing confidence in AI's role; for instance, a recent Microsoft Security report, 'Securing the AI-Powered Enterprise,' found that 47% of current users of AI for security are 'very confident' in its ability to make critical security decisions in high-stakes scenarios. The speed of these automated responses can be key to mitigating the impact of security incidents.
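The guarded autonomy described above is often expressed as playbooks with approval gates. The sketch below is a minimal, hypothetical illustration: scenario names, actions, and the approval mechanism are all invented for the example.

```python
# Hypothetical containment playbook with a human-approval gate for sensitive scenarios.
CONTAINMENT_PLAYBOOK = {
    "ransomware_behaviour": {"actions": ["isolate_host", "kill_process"], "auto": True},
    "credential_theft":     {"actions": ["revoke_tokens", "force_password_reset"], "auto": True},
    "possible_insider":     {"actions": ["isolate_host"], "auto": False},  # needs sign-off
}

def contain(scenario: str, host: str, approved_by_human: bool = False) -> list[str]:
    entry = CONTAINMENT_PLAYBOOK.get(scenario)
    if entry is None:
        return [f"no playbook for '{scenario}'; escalated to analyst"]
    if not entry["auto"] and not approved_by_human:
        return [f"awaiting analyst approval before acting on {host}"]
    return [f"{action} executed on {host}" for action in entry["actions"]]

print(contain("ransomware_behaviour", "srv-db-01"))
print(contain("possible_insider", "ws-1138"))
```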
Real-world examples: Highlighting the value
While the field is still evolving, several platforms already demonstrate the power of AI agents. Microsoft Security Copilot is a prominent example, designed as an AI-powered security analysis tool. It leverages large language models (LLMs) and Microsoft's threat intelligence to help security professionals understand and respond to security events more effectively. It can summarise complex alerts, guide analysts through investigation steps, and even help draft incident reports.
Its agent-like capabilities lie in its ability to process natural language queries, synthesise information from multiple security tools, and propose courses of action. Discussing tools like this, PwC Australia notes that "AI is revolutionising security operations by enhancing threat detection and response," and that such tools allow security analysts to "spend more time on more complex, higher-risk tasks where human judgement is required."
Beyond specific branded solutions, the principles of AI agents are being embedded into broader security platforms. For example, Extended Detection and Response (XDR) platforms increasingly incorporate AI to correlate signals across different security layers, such as endpoints, networks, cloud environments, and email systems, and automate response actions. These systems often feature agent-like components that operate on endpoints to gather data and execute commands.
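In simplified form, that cross-layer correlation amounts to grouping alerts by the entity they involve and flagging entities that appear across multiple telemetry layers within a short window. The Python sketch below illustrates the idea with invented alert data and a deliberately simple two-layer rule.

```python
# Simplified sketch of XDR-style correlation across telemetry layers; data is invented.
from collections import defaultdict
from datetime import datetime, timedelta

# (timestamp, layer, entity, description)
alerts = [
    (datetime(2025, 5, 1, 9, 2),  "email",    "bob",   "credential-phishing link clicked"),
    (datetime(2025, 5, 1, 9, 5),  "endpoint", "bob",   "unsigned binary executed"),
    (datetime(2025, 5, 1, 9, 9),  "network",  "bob",   "beaconing to rare external domain"),
    (datetime(2025, 5, 1, 9, 30), "endpoint", "carol", "USB device mounted"),
]

def correlate(alerts, window=timedelta(minutes=15), min_layers=2):
    by_entity = defaultdict(list)
    for ts, layer, entity, desc in alerts:
        by_entity[entity].append((ts, layer, desc))
    incidents = []
    for entity, items in by_entity.items():
        items.sort()
        span = items[-1][0] - items[0][0]
        layers = {layer for _, layer, _ in items}
        if len(layers) >= min_layers and span <= window:
            incidents.append((entity, sorted(layers), [d for _, _, d in items]))
    return incidents

for entity, layers, timeline in correlate(alerts):
    print(entity, layers, timeline)
# bob's email, endpoint, and network alerts collapse into one correlated incident.
```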
Similarly, Security Orchestration, Automation, and Response (SOAR) tools are also evolving to include more sophisticated AI-driven decision-making. This moves them beyond predefined playbooks to more adaptive responses based on real-time analysis. While some SOAR capabilities are being absorbed into broader platforms, the drive for intelligent automation persists.
Specialised AI firms are also creating autonomous systems for specific use cases. Deception technology is one such area, where AI agents manage dynamic decoy environments that trap and analyse attackers, gathering intelligence without putting production systems at risk.
These AI-driven approaches improve the speed and accuracy of identifying security events, reduce false positives, allow security operations to scale without additional headcount, and free security teams to focus on strategic tasks.
The path forward
The deployment of AI agents in security is not a panacea. Oversight, ethical implications, data privacy, and the potential for adversarial manipulation of the AI systems themselves all remain crucial considerations. Retsef Levi, Professor of Operations Management at the MIT Sloan School of Management, has warned of the "very real risk of creating complex systems with opaque operational boundaries and eroded human capabilities that are prone to major disasters and are not resilient."
However, their potential to significantly enhance security teams is undeniable. As AI technologies mature, they will become essential in modern enterprises, constantly working to anticipate, identify, and neutralise security issues.
The goal is to create a symbiotic relationship where AI handles routine tasks and data analysis, while human experts provide strategic oversight for complex security issues.
Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.