ChatGPT needs ‘right to be forgotten’ tools to survive, Italian regulators demand
ChatGPT users in Italy could be granted tools to have false information changed under new rules
ChatGPT's developer OpenAI has been ordered to implement a ‘right to be forgotten’-style policy in the chatbot by the Italian data protection regulator (SA).
Data subject rights were among the most important considerations for the Italian regulator in deciding whether ChatGPT can continue operating in the country, a presence that has recently been in doubt.
The additional measures that must be implemented, per the Italian SA's recent address, include giving both users and non-users the ability to request that false personal information generated in response to ChatGPT prompts be corrected.
“OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithms,” the regulator said.
“The same right will have to be afforded to users if legitimate interest is chosen as the legal basis for processing their data,” it added.
The measures echo the so-called 'right to be forgotten' - the data privacy rule that predated GDPR and was ultimately codified in the EU-wide regulation when it took effect in 2018.
Since Italy banned the use of ChatGPT in the country earlier this month, a move branded 'an overreaction' by experts, talks have been ongoing between the regulator and OpenAI.
These talks have resulted in the California-based firm being given a 'to-do' list of changes to make before it can resume operating in the country.
OpenAI has been given a deadline of 30 April to comply with the numerous measures set out by the Italian SA.
These include changes to data processing transparency, the rights of data subjects, the legal basis of data processing for algorithmic training, and safeguards for minors.
GDPR and data subject rights
Data subject rights outlined under GDPR comprise eight fundamental tenets, including the right to withdraw consent for the use and processing of personal data.
Under GDPR, citizens are also entitled to the right to rectification under Article 16 of the legislation, meaning that data subjects can request “inaccurate or outdated personal information be updated or corrected”.
Similarly, data subjects have the right to be forgotten, or the ‘right to erasure’, which enables them to request that their personal data be deleted.
In this context, the Italian data protection regulator appears concerned that the potential for personal information to be disclosed via ChatGPT poses a risk to Italian citizens and breaches GDPR.
Large language models (LLMs) such as ChatGPT rely on huge volumes of information drawn from the internet to train AI models.
This has recently raised questions over the privacy risks posed by platforms such as ChatGPT - and the generation of incorrect information has been thrust firmly into the spotlight in this regard.
Last week, an Australian mayor mulled the prospect of legal action after ChatGPT generated false information stating that he had been imprisoned for bribery.
In reality, Brian Hood, Mayor of Hepburn Shire Council, was a whistleblower and was neither arrested nor convicted on criminal charges.
Regulatory crackdown
Discussions around regulatory safeguards to mitigate the potential dangers of generative AI have raged since the launch of ChatGPT in November last year.
Earlier this week, US authorities launched a public consultation to explore potential “accountability measures” for companies developing AI systems such as ChatGPT.
The consultation could guide the development of future US legislation on AI safeguards to ensure responsible use.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.