Microsoft dismisses claims it’s using Word and Excel data to train AI
Reports circulated among users claiming the firm had quietly introduced an opt-out setting tied to its AI training policy
Microsoft has dismissed claims circulating online that it uses customer data to train its AI models, making it the latest firm forced to publicly clarify its AI policy.
A blog post written by author Casey Lawrence initially voiced concerns, suggesting that Microsoft had implemented an ‘opt-out’ feature that, left unchecked, would allow the firm to use customer data in AI training.
“Microsoft Office, like many companies in recent months, has slyly turned on an ‘opt-out’ feature that scrapes your Word and Excel documents to train its internal AI systems,” Lawrence said.
Lawrence warned anyone using Word documents to write proprietary content to ensure the ‘opt-out’ option is selected. The blog includes instructions on how to opt out of the AI training policy.
Users on social media voiced similar concerns, with one popular tech account, nixCraft, posting a screenshot of Lawrence’s blog to X with a quoted portion of the blog’s text.
Microsoft has since denied the circulating claims, posting a rebuttal of the AI training accusations from its Microsoft 365 X account.
“In the M365 apps, we do not use customer data to train LLMs. This setting only enables features requiring internet access like co-authoring a document,” Microsoft said.
Wary customers
This marks the latest in a series of spats between big tech firms and customers over alleged AI training policies, with both Slack and Adobe recently caught in the crosshairs over similar features.
In May, Slack was forced to update the language of its training policy to allay confusion among users, confirming that it uses some customer data to develop “non-generative AI/ML models.”
Slack said users could opt out if they didn’t want their data used in these models, though many rallied against the firm and the automatic opt-in nature of the policy.
The firm learned its lesson from the training fiasco, though. One company exec told ITPro it had been busy engaging with customers to clarify its AI training policies.
Adobe had a similar issue in June when users complained the firm was training its AI model Firefly on customer content. Like Slack, Adobe updated its policy and sought to assure customers that it would never assume ownership of an individual’s work.
The firm even faced backlash from its own staff, with screenshots from an internal comms channel showing employees complaining about the firm’s poor communication and badly handled response.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.