Practical AI: the age of agentic AI

Agentic AI marks a new phase of automation – but only if systems are implemented with proper governance

An abstract CGI render showing a multicolored computer chip with an outline of a human head and the word 'AI agent' on it.
(Image credit: Getty Images)

As AI adoption enters its next phase, businesses are looking beyond copilots and conversational assistants toward a more transformative concept: agentic AI. These autonomous systems don’t just respond but proactively complete tasks according to changing variables. They can plan, reason, and execute tasks across multiple applications and data sources, taking AI from assistant to autonomous collaborator.

Yet amid all the excitement, the mood in the industry is mixed. Some leaders see agentic AI as the dawn of true digital autonomy, while others urge caution, warning that the road from demonstration to dependable enterprise deployment is longer than it looks.

“Optimism is driven by agentic AI’s proactive capabilities,” says Tolga Kurtoglu, chief technology officer at Lenovo. “We’re seeing agents that can complete multi-step tasks, understand user intent, and orchestrate across systems. That’s a leap beyond generative AI. But skepticism is natural – integrating multiple data sources while maintaining governance is complex, and failure rates remain high.”

From copilots to cognitive orchestrators

For most organizations, the shift from copilots to AI agents represents a profound architectural evolution. Instead of supporting humans in narrow tasks, agentic systems coordinate across entire workflows.

At Lenovo, this transformation is already underway. “We’re building AI Super Agents, cognitive operating systems that direct multiple domain-specific agents,” Kurtoglu explains. “These agents don’t just automate tasks; they interpret complex signals, make decisions, and evolve through self-learning.”

Imagine a global supply chain: an AI Super Agent could detect disruptions, reassign tasks, and optimize dependencies in real time. “That’s not just automation,” Kurtoglu adds. “It’s intelligent, dynamic execution.”

The concept of orchestration is emerging across sectors. Alfred Obereder, partner at BearingPoint, says multi-agent systems are the real breakthrough. “We’ll soon see agents interacting with other agents to enable end-to-end automation,” he tells ITPro. “For example, one agent could interview process owners and summarize workflows, while another converts those summaries into process diagrams and optimizes them. This is how agentic AI will scale real business impact.”

These advances explain why 31% of UK organizations are already piloting agentic systems, with 12% scaling them enterprise-wide, according to BearingPoint’s latest research. The same study found that 51% expect efficiency gains and 42% anticipate new revenue streams from agentic architectures within the next five years.

Still, Obereder emphasizes that success depends on control. “Governance must now become a core pillar of enterprise technology strategy,” he says. “Tools like Microsoft Copilot have made it easy for ‘citizen developers’ to build their own agents – but governance and oversight are non-negotiable.”

That balance between empowerment and accountability will define the next phase of AI maturity.

Data, trust, and the reality check

The road to agentic intelligence runs through data integrity. For all the technological excitement, many executives are discovering that their data foundations simply aren’t ready.

“It’s difficult to predict exactly where the long-term value will accrue,” says Josh Rogers, CEO at Precisely. “But the trajectory is unmistakably forward. The real barrier isn’t the technology, it’s trust in the data powering it.”

Rogers explains that only 12% of organizations report their data is of sufficient quality for effective AI. “As long as companies continue to use low-integrity data, they’ll continue to generate unreliable outputs,” he says. “Transparency, explainability, and observability are not optional – they’re business imperatives.”

Siddharth Rajagopal, chief architect for EMEA-LATAM at Informatica, agrees. “Data is both the fuel and the bottleneck,” he tells ITPro. “To move forward safely, businesses need a trust-by-design model that integrates data quality, ownership, and governance into a centralized agentic AI hub.”

The evolution from static to self-improving intelligence will only redefine how organizations operate if they can balance autonomy with accountability.

“Autonomy must be matched with explainability and security,” Kurtoglu emphasizes. “We evaluate every AI solution against privacy, reliability, transparency, and ethical impact. Without those safeguards, agentic AI risks becoming a black box, powerful but opaque.”

Building trustworthy agents: governance, guardrails, and human oversight

As agentic systems gain autonomy, the call for rigorous governance grows louder. Nell Watson, AI ethicist and president at the European Responsible AI Office, says that capability and trustworthiness must advance together.

“Agents can decompose complex goals, adapt to unexpected conditions, and orchestrate multi-step processes across systems without constant supervision,” she explains. “They’re becoming junior colleagues rather than sophisticated apps. But they’re also brittle, exploitable, and prone to catastrophic failures that current frameworks weren’t designed to handle.”

Watson warns that many agents deployed today are vulnerable to jailbreaking and prompt injection – attacks that manipulate behavior through malicious inputs. “Without architectural separation between trusted instructions and external content, these agents can be tricked into executing unauthorized actions or leaking sensitive data,” she says.

To counter this, Watson advocates for “tamper-evident audit trails” and provenance systems that make agent decision-making transparent. “Organizations must be able to trace why an agent made a decision: what data informed it, which alternatives it considered, and how it reached its conclusion,” she explains. “That’s essential for oversight and accountability.”

Security isn’t the only concern. Benjamin Brial, founder of Cycloid.io, says too many companies are still chasing “autonomous” AI without understanding the trade-offs. “Optimism comes from possibility; skepticism comes from experience,” he says. “People see demos of agents writing code or fixing workflows like magic – but those demos exist in perfect conditions. Real enterprises are messy.”

Brial tells ITPro that the biggest risk is assuming autonomy equals trust. “It doesn’t,” he says. “We need sober AI, bounded systems with defined permissions and human approval. There isn’t going to be a super-agent running your business; there will be practical, assistive tools that genuinely improve how people work day to day.”

That pragmatic view is echoed by Efrain Ruh, continental CTO, Europe at Digitate. “Only 15% of IT leaders are currently considering or piloting fully autonomous agents,” he says. “That’s because governance, hallucination control, and security remain major challenges.”

Ruh emphasizes that narrowing an agent’s scope is key. “Define its purpose, validate its data, and keep humans in the loop,” he explained. “Reliable guardrails minimize risk and make autonomy safe rather than reckless.”

The next evolution: when AI becomes infrastructure

Despite today’s growing pains, the direction of travel is clear. Gartner predicts that by 2028, one-third of enterprise software will include agentic capabilities, and 15% of day-to-day work decisions will be made autonomously.

Peter van der Putten, director at Pega AI Lab and assistant professor at Leiden University, says this marks the beginning of “AI as actionable intelligence”.

“Agents can drive outcomes independently,” he tells ITPro, “but they must be embedded into workflows, journeys, and governance ecosystems. Otherwise, they’re just clever add-ons.”

In addition, Van der Putten points to “design-time agents” as an emerging category with immediate business value. “These agents assist in building workflows and applications in low-code environments,” he explains. “They empower non-technical experts to create new systems safely, bridging the gap between legacy infrastructure and modern AI-driven operations.”

He adds that the key differentiator between hype and genuine progress is predictability. “You don’t need an army of agents unleashed on the enterprise,” he says. “You need predictable systems that deliver repeatable outcomes, combined with transparency so you can see what they’re doing, and why.”

In the workforce, that will mean a shift in what productivity looks like. As Watson notes, “Employees will move from operating tools to orchestrating agents.” The most valuable skills will be defining objectives, evaluating agent output, and intervening when needed. “Technical proficiency becomes less important than judgment and oversight,” she emphasizes.

At SmartRecruiters, CEO Rebecca Carr sees this shift in real time. “Agentic AI is cutting hours of coordination into minutes,” she says. “Our embedded agent Winston automates screening and scheduling, but the real change is cultural, people can focus on relationships and strategy instead of administration.”

Carr adds that trust remains central. “In hiring, transparency isn’t optional,” she explains. “Leaders need to understand how an agent forms recommendations and how it responds when conditions change. The goal isn’t perfect AI, it’s dependable systems that can explain their reasoning, recover safely from mistakes, and keep learning.”

From experimentation to enterprise core

Looking three to five years ahead, experts agree that the defining milestone for agentic AI will be its integration into enterprise governance and measurement frameworks.

Obereder from BearingPoint says it will be clear an organization has crossed the threshold “when agent performance is measured and reported alongside core business KPIs”. Rajagopal predicts that by the late 2020s, “agents will not only execute decisions but also propose new semantic definitions, expanding enterprise knowledge graphs and contributing to shared context layers.”

Watson believes the final signal of maturity will be standardization. “We’ll see safety documentation, audit procedures, and interoperability protocols emerge—just like we have in networking or data security,” she says. “When agent deployment goes through the same procurement, compliance, and risk processes as ERP or cloud infrastructure, that’s when it becomes core.”

The rise of agentic AI is, in many ways, a test of enterprise discipline. The technology’s potential is vast – but as Benjamin Brial emphasizes, “the issue isn’t innovation, it’s alignment between ambition and reality.”

For now, success lies not in unleashing armies of agents but in deploying a few that are reliable, auditable, and aligned with human judgment. The age of agentic AI isn’t about replacing people, but reengineering how people and intelligent systems collaborate to think, decide, and act together.

From the first real-world applications of AI to copilots and now agentic AI, the story of AI’s payoff is one of evolution through collaboration. The next frontier will not be defined by machines working alone, but by humans and AI systems building shared intelligence together.

David Howell is a freelance writer, journalist, broadcaster and content creator helping enterprises communicate.

Focussing on business and technology, he has a particular interest in how enterprises are using technology to connect with their customers using AI, VR and mobile innovation.

His work over the past 30 years has appeared in the national press and a diverse range of business and technology publications. You can follow David on LinkedIn.