Microsoft Copilot bug saw AI snoop on confidential emails — after it was told not to

The Copilot bug meant the AI summarization tool accessed messages in the Sent Items and Drafts folders, dodging policy rules


Microsoft's Copilot has been found reading and summarizing email messages despite "confidential" labels that should prevent the AI system from accessing the data.

The tech giant issued a warning about a bug in Microsoft 365 Copilot's "work tab" Chat that caused the AI to incorrectly process messages that should have been skipped due to their sensitivity labels.

In a message shared with affected users, Microsoft said a code issue meant emails in the Sent Items and Drafts folders were being picked up despite policies stipulating that messages with confidential labels shouldn't be read.

"Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft told BleepingComputer.

"The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured."

The company said it is investigating the source of the flaw and its impact, with a fix that has been rolling out to users' systems since the beginning of February.

Once the rollout is complete, Microsoft said it will contact users to ensure the bug has been fully remediated. It's unclear how many organizations were affected.

The issue was first spotted on 21 January and is tracked by Microsoft as CW1226324.

Microsoft Copilot Chat rules

Copilot Chat is Microsoft's tool for interacting with an AI agent directly from Word and other productivity software. It first rolled out in September.

Microsoft 365 Copilot reads through data such as emails, documents, chats, and more to help dig information out for users.

With privacy in mind, Microsoft built in administrative controls that let companies keep AI away from sensitive material. This bug meant those rules were not applied to the Sent Items and Drafts folders in email, letting Copilot access and summarize messages in those folders even when they were labelled confidential.
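Microsoft hasn't detailed the faulty code, but the behavior it describes matches a familiar failure mode: a policy check applied per folder rather than per message. The Python sketch below is purely illustrative, and every name in it (Message, PROTECTED_LABELS, summarize_buggy and so on) is hypothetical, with no relation to Microsoft's actual implementation. It shows how a label check wired up for only some folders lets confidential mail slip through:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for illustration only; not Microsoft's code or API.
PROTECTED_LABELS = {"confidential", "highly confidential"}

@dataclass
class Message:
    folder: str                       # e.g. "Inbox", "Sent Items", "Drafts"
    sensitivity_label: Optional[str]  # label applied by the user, if any
    body: str

def is_blocked(msg: Message) -> bool:
    """Return True if a DLP-style policy should keep the AI away from this message."""
    return msg.sensitivity_label in PROTECTED_LABELS

# Buggy pattern: the policy gate only runs for some folders, so labelled
# mail sitting in Sent Items or Drafts is still handed to the summarizer.
POLICY_CHECKED_FOLDERS = {"Inbox"}  # Sent Items and Drafts missing

def summarize_buggy(messages: list[Message]) -> list[str]:
    summaries = []
    for msg in messages:
        if msg.folder in POLICY_CHECKED_FOLDERS and is_blocked(msg):
            continue  # label respected only where the check actually runs
        summaries.append(msg.body[:50])  # stand-in for AI summarization
    return summaries

# Fixed pattern: evaluate the label on every message, regardless of folder.
def summarize_fixed(messages: list[Message]) -> list[str]:
    return [m.body[:50] for m in messages if not is_blocked(m)]

mail = [
    Message("Inbox", "confidential", "Q3 acquisition plans..."),
    Message("Drafts", "confidential", "Unannounced reorg memo..."),
]
print(summarize_buggy(mail))  # leaks the draft: ['Unannounced reorg memo...']
print(summarize_fixed(mail))  # []
```

In this hypothetical, the fix is the standard one: make the policy decision a property of the message itself, not of whichever folder it happens to sit in.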

AI security risks

The rise of generative AI use in businesses has sparked concerns about the security risks, be it breaching confidentiality guidelines in sensitive industries, leaking private data, or offering a new attack vector via prompt injections or other hacking techniques.

Researchers have already spotted thousands of corporate secrets in one popular AI training dataset, suggesting the industry is struggling to keep up with the realities of data security in the AI era.

The risk is exacerbated by shadow AI, in which employees use AI chatbots or other tools without official approval or IT department support, meaning data-protection guidelines aren't in place to protect private or sensitive information.

That's already causing a huge surge in data policy violations, according to a report from Netskope, which found almost a third of workers using AI covertly at work.

There have been previous issues with Copilot. Back in 2024, academic researchers spotted security vulnerabilities in retrieval-augmented generation (RAG) systems used by Microsoft Copilot that could allow such tools to commit confidentiality violations.


Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.

Nicole is the author of a book about the history of technology, The Long History of the Future.