Microsoft dismisses claims it’s using Word and Excel data to train AI

(Image: The Microsoft logo illuminated on a wall during a Microsoft launch event in New York City, May 2017. Credit: Getty Images)

Microsoft has dismissed claims circulating online that it uses customer data to train its AI models, making it the latest firm forced to publicly clarify its AI policy.

A blog post written by author Casey Lawrence initially voiced the concerns, suggesting that Microsoft had implemented an ‘opt-out’ setting that, if left enabled, would allow the firm to use customer data for AI training.

“Microsoft Office, like many companies in recent months, has slyly turned on an ‘opt-out’ feature that scrapes your Word and Excel documents to train its internal AI systems,” Lawrence said.

Lawrence warned anyone using Word documents to write proprietary content to make sure they had opted out, and the blog includes instructions on how to disable the setting.

Users on social media voiced similar concerns, with one popular tech account, nixCraft, posting a screenshot of Lawrence’s blog to X alongside a quoted portion of its text.

Microsoft has since denied the claims, posting a rebuttal of the AI training accusations from its Microsoft 365 account on X.

“In the M365 apps, we do not use customer data to train LLMs. This setting only enables features requiring internet access like co-authoring a document,” Microsoft said.

Wary customers

This marks the latest in a series of spats between big tech firms and customers over alleged AI training policies, with both Slack and Adobe recently caught in the crosshairs over similar features.

In May, Slack was forced to update the language of its training policy to allay confusion among users, confirming that it uses some customer data to develop “non-generative AI/ML models.”

Slack said users could opt out if they didn’t want their data used in these models, though many rallied against the firm and the automatic opt-in nature of the policy.

The firm appears to have learned its lesson from the training fiasco, though, with one company exec telling ITPro it had been busy engaging with customers to clarify its AI training policies.


Adobe had a similar issue in June when users complained the firm was training its AI model Firefly on customer content. Like Slack, Adobe updated its policy and sought to assure customers that it would never assume ownership of an individual’s work.

The firm even faced backlash from its own staff, with screenshots from an internal comms channel showing employees complaining about the firm’s poor communication and badly handled response.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.