Microsoft Copilot bug saw AI snoop on confidential emails — after it was told not to
The Copilot bug meant an AI summarizing tool accessed messages in the Sent and Draft folders, dodging policy rules
Microsoft's Copilot has been found reading and summarizing email messages despite "confidential" labels that should prevent the AI system from accessing the data.
The tech giant issued a warning about a bug in the Microsoft 365 Copilot "work tab" Chat, which allowed the AI to process messages that should have been skipped because of their sensitivity labels.
In a message shared with affected users, Microsoft said a code issue meant emails in the Sent Items and Drafts folders were being picked up, despite policies stating that messages with confidential labels should not be read.
"We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop," a spokesperson told ITPro.
"This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access."
The spokesperson added that a "configuration update" has been deployed for customers globally.
The issue was first spotted on 21 January and is tracked by Microsoft as CW1226324.
Microsoft Copilot Chat rules
Copilot Chat is Microsoft's tool for interacting with an AI agent directly from Word and other productivity software. It first rolled out in September.
Microsoft 365 Copilot reads through data such as emails, documents, chats, and more to help dig information out for users.
With privacy in mind, Microsoft built in administrative controls that let companies keep AI away from sensitive material, but this bug meant those rules were not applied to the Sent Items and Drafts folders in email, letting Copilot access and summarize messages even when they were labelled confidential.
AI security risks
The rise of generative AI use in businesses has sparked concerns about the security risks, be it breaching confidentiality guidelines in sensitive industries, leaking private data, or offering a new attack vector via prompt injections or other hacking techniques.
Researchers have already spotted thousands of corporate secrets in one popular AI training dataset, suggesting industry is struggling to keep up with the realities of data security in the AI era.
The risk is exacerbated by shadow AI, when employees use AI chatbots or other tools without official approval or IT department support, meaning data-protection guidelines aren't in place to protect private or sensitive information.
That's already causing a surge in data policy violations, according to a report from Netskope, which found almost a third of workers using AI covertly at work.
There have been previous issues with Copilot. Back in 2024, academic researchers spotted security vulnerabilities in retrieval augmented generation (RAG) systems used by Microsoft Copilot that could lead to such tools committing confidentiality violations.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
