How AI-enhanced security features are evolving on modern devices

The world of cyber threats is ever-growing, but AI can help


As artificial intelligence (AI) becomes deeply embedded in everyday computing, modern devices – especially those running Windows 11 – are rapidly evolving to harness AI’s potential safely and effectively. With AI capabilities increasingly integrated directly into device chips, often referred to as edge AI, users can experience faster and more secure tools without constant reliance on cloud connectivity.

At the same time, this technological leap ushers in complex cybersecurity challenges such as shadow AI: the unsanctioned, unmanaged use of AI tools that can inadvertently expose sensitive data. Datacom’s recently commissioned Australian workplace research – based on a survey of more than 2,000 employers and employees across Australia – offers useful insight into how the convergence of AI and hardware-level security can shape the future of cybersecurity on modern devices, and why the enforced transition to Windows 11 presents an ideal opportunity for organisations to prepare.

The rise of AI at the edge and its security implications

Edge AI is the embedding of AI capabilities directly into device chips, allowing data processing to occur locally rather than in the cloud. This shift offers significant productivity benefits as well as important security and privacy advantages.
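To make the local-versus-cloud distinction concrete, here is a minimal Python sketch of on-device inference using ONNX Runtime. The model file name, the dummy input and the DirectML provider are illustrative assumptions, not details from Datacom’s research or a specific Windows 11 feature.

```python
# Minimal sketch of on-device (edge) inference with ONNX Runtime.
# "model.onnx" and the DirectML provider are illustrative assumptions;
# any locally stored model and available provider would do.
import numpy as np
import onnxruntime as ort

# Prefer a hardware-accelerated provider if the build offers one, else fall back to CPU.
preferred = ("DmlExecutionProvider", "CPUExecutionProvider")
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)

# Build a dummy input matching the model's first declared input.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]  # resolve dynamic dims
sample = np.random.rand(*shape).astype(np.float32)

# Inference runs entirely on the device: no network call, no cloud round trip.
outputs = session.run(None, {input_meta.name: sample})
print(outputs[0].shape)
```

The key design point is the last comment: because the model and the data both stay on the device, sensitive inputs never transit the network, which is where the privacy advantage of edge AI comes from.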

Edge AI powers many Windows 11 features such as Recall – a digital memory of user activity that can streamline tasks like report writing – as well as Live Captions, real-time translation and advanced content creation tools like Cocreator in Paint.

Datacom’s research found that 50% of surveyed employees use AI features at work, and 91% of employers say they encourage employees to use AI for regular work tasks. The business case for using AI is clear, with 74% of AI users citing time-saving benefits and 56% noting increased productivity.

Elevated cybersecurity risks from unmanaged AI usage

While AI offers great promise, unmanaged usage of public AI platforms at work poses significant risks.

David Stafford-Gaffney, Datacom’s Associate Director of Cybersecurity, warns: “Users may be unknowingly uploading or exposing sensitive data to these AI platforms [or] training AI models with corporate information.”

Examples include users integrating ChatGPT into Microsoft PowerPoint presentations or linking ChatGPT to their OneDrive accounts, creating potential data exposure without IT oversight.

Stafford-Gaffney notes that shadow AI today parallels the cloud storage misconfigurations of the early cloud adoption era, which precipitated major data breaches.

“We're at that same nexus with shadow AI where we need to start thinking about how we anticipate and manage AI usage and awareness,” he explains.

To counteract these risks, he advocates instilling a ‘healthy paranoia’ among employees to improve cyber vigilance. He recommends asking simple but critical questions: “Am I expecting this email? Does the sender match what I know?”

Security experts also caution that prevention technologies alone cannot stop all AI-driven cyber threats. Instead, organisations must invest heavily in advanced detection tools to reduce the time it takes to identify and respond to attacks.

“It’s not a silver bullet. We need to lift our focus on detection [because] prevention isn’t going to prevent everything; AI-powered attacks [will] get through,” says Stafford-Gaffney.

Silicon-level security: The new frontier in cyber defence

At the heart of modern device security evolution is ‘security in silicon’, which involves embedding hardware-level security features directly into chip architectures.

Technologies such as Intel® Threat Detection Technology (TDT) use AI and CPU telemetry to monitor running processes, identifying anomalies associated with ransomware, cryptojacking and software supply chain attacks.
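Intel TDT’s internals are not described here, but the general idea – score live CPU telemetry against a baseline and flag outliers – can be sketched in a few lines. The psutil sampling and the z-score threshold below are illustrative assumptions, not Intel’s implementation.

```python
# Toy sketch of telemetry-based anomaly detection (not Intel TDT's actual
# mechanism): sample per-process CPU usage, build a baseline, flag outliers.
import time
import statistics
import psutil

def sample_cpu(samples: int = 5, interval: float = 1.0) -> dict[str, list[float]]:
    """Collect a few CPU-usage readings per process name."""
    readings: dict[str, list[float]] = {}
    for _ in range(samples):
        for proc in psutil.process_iter(["name", "cpu_percent"]):
            name = proc.info["name"] or "unknown"
            readings.setdefault(name, []).append(proc.info["cpu_percent"] or 0.0)
        time.sleep(interval)
    return readings

def flag_anomalies(readings: dict[str, list[float]], threshold: float = 3.0) -> list[str]:
    """Flag processes whose latest reading sits far above their own baseline."""
    flagged = []
    for name, values in readings.items():
        if len(values) < 3:
            continue
        baseline, latest = values[:-1], values[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid divide-by-zero
        if (latest - mean) / stdev > threshold:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    print("Processes with unusual CPU spikes:", flag_anomalies(sample_cpu()))
```

A real hardware-level system works on far richer signals than a per-process CPU percentage, but the shape of the problem is the same: learn what normal looks like, then surface deviations fast enough to act on them.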

Hardware-embedded detection like this complements user education and organisational policy measures, creating a multilayered defence strategy. While 90% of employees feel their work devices are secure, gaps remain: only 72% report having up-to-date encryption, and endpoint protection is not universal, highlighting areas for improvement as AI adoption grows.

Best practices for embracing AI-enhanced security features

To fully leverage AI's benefits while mitigating risks, organisations should:

  • Implement robust detection tools that complement preventive cybersecurity measures.
  • Provide user training to build awareness of shadow AI risks and foster vigilant online behaviour.
  • Establish AI governance policies to monitor and manage workplace AI usage proactively (a simplified example follows this list).
  • Address digital infrastructure frustrations, such as poor device performance and outdated software, which hinder AI adoption.
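As a deliberately simplified illustration of the governance point above, the sketch below checks the destination of an outbound AI-service request against a sanctioned allowlist. The domain names and policy shape are hypothetical, not Datacom or Intel policy.

```python
# Simplified sketch of an AI-usage governance check: compare the domain of an
# outbound AI-service request against a sanctioned allowlist.
from urllib.parse import urlparse

# Hypothetical policy: AI endpoints the organisation has approved.
SANCTIONED_AI_DOMAINS = {
    "copilot.example-corp.internal",   # assumed internal, managed deployment
    "api.approved-ai-vendor.example",  # assumed vetted third-party service
}

def is_sanctioned(url: str) -> bool:
    """Return True if the request targets an approved AI service."""
    host = urlparse(url).hostname or ""
    return host in SANCTIONED_AI_DOMAINS

# Example: an unmanaged public AI endpoint would be flagged for review.
for url in ("https://api.approved-ai-vendor.example/v1/chat",
            "https://public-ai-tool.example.com/upload"):
    verdict = "allowed" if is_sanctioned(url) else "flag for review (possible shadow AI)"
    print(f"{url} -> {verdict}")
```

In practice this kind of check would live in a secure web gateway or endpoint agent rather than a script, but the principle is the same: make sanctioned AI tools easy to reach and unsanctioned ones visible to IT.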

By combining hardware-enabled security, organisational governance and empowered users, businesses can ensure they remain resilient in the evolving threat landscape.

AI-enhanced security features embedded at the silicon level are transforming modern devices into intelligent, efficient and secure tools. Windows 11 provides an ideal platform to embrace this evolution, enabling organisations to harness edge AI’s productivity gains without compromising privacy and security.

The rise of shadow AI and AI-driven cyber-attacks demands a balanced approach, combining technological advances with vigilant, well-informed users. With timely action, especially driven by structured transitions like Datacom’s Windows 11 migration framework, organisations can confidently navigate the AI-enabled future, turning potential vulnerabilities into strategic advantages.

To learn more about how Datacom and Intel can help you deploy secure Intel-powered devices, visit the partner web page.