Adobe clarifies new terms of service after AI model training concerns – but it still might not cut it with users

Adobe logo pictured on a smartphone. (Image credit: Getty Images)

Adobe has tightened up its terms of use policy after outraged users complained that the firm was training its AI model Firefly on their content.

The firm initially stated in a since-updated policy that it may “access your content through both automated and manual methods, such as for content review,” directing users to a section on its machine learning practices.

This sparked fear in Adobe’s user community, forcing the creative software provider to respond with a clearer declaration of its content use policy and AI training techniques in a blog post.

“We remain committed to transparency, protecting the rights of creators and enabling our customers to do their best work,” the firm stated. “Our commitments to our customers have not changed.”

First, Adobe made clear that it does not train Firefly, its generative AI platform, on customer content, but rather on content it is legally permitted to use.

“Firefly generative AI models are trained on a dataset of licensed content, such as Adobe Stock, and public domain content where copyright has expired,” Adobe stated.

The firm was also clear on ownership, stating that it would “never assume” ownership of a customer’s work and that its role as a platform is simply to host content so its applications can function.

“Customers own their content and Adobe does not assume any ownership of customer work,” the firm stated. 

As part of the clarification, Adobe included its previous terms of use policy with the firm’s updated wording overlaid, showing that it had only altered a handful of sentences. 

“We appreciate our customers who reached out to ask these questions, which has given us an opportunity to clarify our terms and our commitments. We will be clarifying the Terms of Use acceptance customers see when opening applications,” the firm concluded.

In a post on X, Adobe’s chief product officer Scott Belsky agreed that the wording used by the firm was “unclear”, though he was careful to note that Adobe has had a similar stipulation in its terms of service “for over a decade”.

“But trust and transparency couldn’t be more crucial these days, and we need to be clear when it comes to summarizing terms of service in these pop-ups,” he added. 

Adobe faced user outrage 

Before Adobe clarified its position, many users were shocked at the thought of the firm training its models on proprietary content, and some called for Adobe users to quit the platform entirely.


“If you are a professional, if you are under NDA with your clients, if you are a creative, a lawyer, a doctor or anyone who works with proprietary files - it is time to cancel Adobe, delete all the apps and programs. Adobe can not be trusted,” said one X user.

Another tagged Adobe and Photoshop in a similar post, asking exasperatedly if they were correct in thinking that they couldn’t “use Photoshop unless I'm okay with you having full access to anything I create with it, INCLUDING NDA work?”

The situation resembles a recent fiasco at Slack, in which users were angered to discover wording in the firm’s policy that suggested it was training models on customer data.

Slack has since updated its policy and made clear that it does not train AI models on customer data, with the firm’s CCO telling ITPro that the company has been clear with customers that this is the case. 

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.