Researchers sound alarm over AI hardware vulnerabilities that expose training data
Hackers can abuse flaws in AI accelerators to break AI privacy – and a reliable fix could be years away


AI researchers have warned that the privacy of AI models' training data is at risk due to a novel hardware vulnerability in commonly used components.
Researchers at North Carolina State University have discovered a fundamental flaw, which they’ve dubbed GATEBLEED, in the way machine learning (ML) accelerators handle the data an AI model was trained on.
ML accelerators are common in both consumer and business-grade devices, whether as the neural processing units (NPUs) increasingly found in AI PCs or as specialized extensions to central processing units (CPUs) and graphics processing units (GPUs).
They’re intended to make AI functions faster to execute on devices, while also reducing the amount of energy required to process data using AI algorithms.
But the researchers have discovered that ML accelerators may contain a fundamental flaw in the way they manage power while processing AI tasks, one that can expose an AI model’s training data.
“Chips are designed in such a way that they power up different segments of the chip depending on their usage and demand to conserve energy,” said Darsh Asher, co-author of the paper exposing GATEBLEED and PhD student at NC State.
“This phenomenon is known as power gating and is the root cause of this attack. Almost every major company implements power gating in different parts of their CPUs to gain a competitive advantage.”
The researchers found that these power fluctuations were measurably different when the ML accelerator processed data its AI model had been trained on versus data it had not, allowing attackers to indirectly access privileged information.
“So if you plug data into a server that uses an AI accelerator to run an AI system, we can tell whether the system was trained on that data by observing fluctuations in the AI accelerator usage,” said Azam Ghanbari, co-author of the paper and PhD student at NC State.
“And we found a way to monitor accelerator usage using a custom program that requires no permissions.”
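The paper’s exact measurement technique isn’t spelled out here, but the general idea can be illustrated with a minimal sketch: assuming the leak shows up as a latency difference when power-gated accelerator units wake up, an unprivileged process could simply time its own queries to the model. The `run_inference` callable, trial count, and threshold below are hypothetical placeholders for illustration, not details from the paper.

```python
# Conceptual sketch of a timing-based membership probe (illustrative only).
# Assumption (not from the article): the GATEBLEED signal appears as a
# measurable latency difference when power-gated accelerator units wake up.
import statistics
import time


def measure_latency(run_inference, sample, trials=200):
    """Time repeated inference calls on one input and return the median latency."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        run_inference(sample)  # query the model hosted on the accelerator
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)


def likely_in_training_set(run_inference, sample, baseline_latency, threshold=0.05):
    """Flag an input whose latency deviates from a calibrated baseline.

    `baseline_latency` would be calibrated on inputs known not to be in the
    training set; `threshold` is an illustrative relative deviation, not a
    value from the paper.
    """
    latency = measure_latency(run_inference, sample)
    return abs(latency - baseline_latency) / baseline_latency > threshold
```

Notably, a probe like this only needs an ordinary timer and the ability to send queries, which is consistent with the researchers’ claim that their monitoring program requires no special permissions.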
Discovering the data used to train a specific AI model exposes it to potential jailbreaking or poisoning. The paper warns that mixture of experts (MoE) models and AI agents could be at particular risk from the attack.
Because the vulnerability stems from a hardware flaw rather than a software one, it cannot be easily patched. The paper’s authors warned it could take years for a redesign to be developed and rolled out across CPUs.
Attacks against hardware, they said, can circumvent any encryption, sandboxing, or privilege controls that cybersecurity teams impose.
A temporary fix could involve defenses at the operating system (OS) level, but the authors warned that this would come at a cost to performance and energy use.
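To see why such defenses are costly, consider a minimal sketch (not the paper’s proposed mitigation, and the `FIXED_LATENCY_S` budget and `padded_inference` wrapper are hypothetical) of one OS- or service-level approach: padding every inference to a fixed worst-case latency so that timing no longer varies with the input.

```python
# Illustrative sketch of a constant-latency wrapper (not the paper's defense):
# pad every inference to a fixed worst-case budget so all requests look alike
# to a timing observer, at a direct cost in performance and energy.
import time

FIXED_LATENCY_S = 0.050  # hypothetical worst-case budget per request


def padded_inference(run_inference, sample):
    """Run inference, then sleep until a constant deadline has elapsed."""
    deadline = time.perf_counter() + FIXED_LATENCY_S
    result = run_inference(sample)
    remaining = deadline - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)  # burn the slack so timing reveals nothing
    return result
```

Every request then pays the worst-case latency, which is exactly the kind of performance and energy penalty the authors describe.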
For the purposes of the research, the authors demonstrated the vulnerability against Intel Advanced Matrix Extensions (AMX), which acts as a built-in AI accelerator on a 4th generation Intel Xeon Scalable CPU.
