LinkedIn faces lawsuit amid claims it shared users' private messages to train AI models
The professional networking app described the allegations as "false claims with no merit"


In a US lawsuit filed on behalf of LinkedIn Premium users, the professional networking app has been accused of using members' private messages to train AI models.
Filed in a California federal court, the lawsuit, brought on behalf of LinkedIn user Alessandro De La Torre, accuses the company of breaching its contractual promises by disclosing Premium customers' private messages to third parties in order to train generative AI models.
"Given its role as a professional social media network, these communications include incredibly sensitive and potentially life-altering information about employment, intellectual property, compensation, and other personal matters," the filing reads.
"Microsoft is the parent company of LinkedIn, and Defendant claims it disclosed its users’ data to third-party 'affiliates' within its corporate structure, and in a separate instance, more cryptically to 'another provider'. LinkedIn did not have its Premium customers’ permission to do so."
In a statement given to ITPro, a spokesperson for LinkedIn said: “These are false claims with no merit.”
The story behind the LinkedIn lawsuit
The case hinges on a change to LinkedIn's privacy practices last year, under which users were opted in by default to allowing third parties to use their personal data to train AI models.
According to the lawsuit, the change was initially made quietly, only appearing in the company's privacy policy in September, after a backlash from users and privacy campaigners.
The company exempted customers in Canada, the EU, EEA, the UK, Switzerland, Hong Kong, and Mainland China from the data sharing - but not those in the US.
"Like other features on LinkedIn, when you engage with generative AI powered features we process your interactions with the feature, which may include personal data (e.g., your inputs and resulting outputs, your usage information, your language preference, and any feedback you provide)," the company said in its FAQs.
The change wasn't universally welcomed, however, with the UK’s Information Commissioner’s Office (ICO) noting that the opt-out approach wasn’t sufficient to protect user privacy.
Digital rights campaigners at Open Rights Group also complained that the opt-out model "proves once again to be wholly inadequate" to protect user rights.
The lawsuit seeks compensation of $1,000 per Premium user for alleged violations of the US federal Stored Communications Act, along with an unspecified additional sum for breach of contract and violation of California's Unfair Competition Law (UCL).
It also calls for the company to delete all AI models trained using improperly collected data.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.