Microsoft dismisses claims it’s using Word and Excel data to train AI
Claims circulated among users that the firm had quietly introduced an opt-out setting tied to its AI training policy
Microsoft has dismissed claims circulating online that it uses customer data to train its AI models, making it the latest firm forced to publicly clarify its AI policy.
A blog post by author Casey Lawrence initially raised concerns, suggesting that Microsoft had implemented an ‘opt-out’ feature that, if left unchecked, would allow the firm to use customer data in AI training.
“Microsoft Office, like many companies in recent months, has slyly turned on an ‘opt-out’ feature that scrapes your Word and Excel documents to train its internal AI systems,” Lawrence said.
Lawrence warned anyone using Word documents to write proprietary content to ensure the ‘opt-out’ feature is selected. The blog includes instructions on how to opt out of the AI training policy.
Users on social media voiced similar concerns, with one popular tech account, nixCraft, posting a screenshot of Lawrence’s blog to X with a quoted portion of the blog’s text.
Microsoft has since denied these circulating claims, responding on social media by posting a rebuttal of the AI training accusations to its Microsoft 365 X account.
“In the M365 apps, we do not use customer data to train LLMs. This setting only enables features requiring internet access like co-authoring a document,” Microsoft said.
Wary customers
This marks the latest in a series of spats between big tech firms and customers over alleged AI training policies, with both Slack and Adobe recently caught in the crosshairs over similar features.
In May, Slack was forced to update the language of its training policy to allay confusion among users, confirming that it uses some customer data to develop “non-generative AI/ML models.”
Slack said users could opt out if they didn’t want their data used in these models, though many rallied against the firm and the automatic opt-in nature of the policy.
The firm learned its lesson from the training fiasco, though. One company exec told ITPro it had been busy engaging with customers to clarify its AI training policies.
Adobe had a similar issue in June when users complained the firm was training its AI model Firefly on customer content. Like Slack, Adobe updated its policy and sought to assure customers that it would never assume ownership of an individual’s work.
The firm even faced backlash from its own staff, with screenshots from an internal comms channel showing employees complaining about the firm’s poor communication and badly handled response.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.