Italy’s ChatGPT ban branded an “overreaction” by experts


The decision by Italian regulators to ban access to ChatGPT has drawn criticism from industry experts, who have branded it a hindrance to innovation and a distraction from legitimate concerns around AI and privacy.

Italian authorities banned access to ChatGPT at the end of March, having deemed the technology’s mass collection of user data for algorithmic training purposes to have no legal basis.

The lack of any age verification mechanism to shield underage users from inappropriate content was also highlighted as a key issue.

The Italian data protection authority, Garante per la protezione dei dati personali, issued a decision on 31 March declaring that it would temporarily suspend the processing of Italian user data by OpenAI.

OpenAI has restricted access to ChatGPT in Italy while it works with the Garante to address the regulator’s concerns.

Those who purchased ChatGPT Plus subscriptions in March will receive a refund, and all subscribers in the region have had their recurring payments paused.

The ban prompted debate among AI and legal experts, who have questioned the legitimacy and wisdom of the decision.

Andy Patel, researcher at WithSecure, called the ruling an “overreaction” and warned that it could put Italy at a disadvantage when it comes to AI development.

“ChatGPT is a useful tool that enables creativity and productivity. By shutting it off, Italy has cut off perhaps the most important tool available to our generation,” he said.

"All companies have security concerns, and, of course, employees should be instructed to not provide ChatGPT and similar systems with company-sensitive data. Such policies should be controlled by individual organisations and not by the host country.”

Others have suggested that the ban will not last and is a distraction from the wider issues of privacy and security that must be addressed when it comes to large language models (LLMs).


“‘Banning’ these models - whatever that term means in this context - is simply encouraging more perfidy on the part of these companies to restrict access and concentrates more power in the hands of tech giants who are able to sink the money into training such models,” said Erick Galinkin, principal artificial intelligence researcher at Rapid7.

“Rather, we should be looking for more openness around what data is collected, how it is collected, and how the models are trained.”

Concerns around the centralisation of generative AI in the hands of big tech at present have led to calls for a 'democratisation' of the technology.

AI model leakers have called for the leaked Meta LLM, LLaMA, to be stored on the Bitcoin blockchain to maintain free distribution, while AWS and Hugging Face have partnered to improve access to models.

ChatGPT raised eyebrows last month when a privacy flaw exposed users' chatbot interactions, the first real blow to the company's image since its early GPT-3 tests were branded 'too dangerous' for public use.

Michael Covington, VP of strategy at software firm Jamf, noted that there is value in a pause to ensure that AI technology proceeds in a controlled manner.

“That said, I get concerned when I see attempts to regulate common sense and force one 'truth' over another,” he added.

“At Jamf, we believe in educating users about data privacy, and empowering them with more control and decision-making authority over what data they are willing to share with third parties. Restricting the technology out of fear for users giving too much to any AI service could stunt the growth of tools like ChatGPT, which has incredible potential to transform the ways we work.”

The Italian authority also asked OpenAI to provide notice of the measures implemented to comply with its order, or face a fine of up to €20 million ($21 million) or 4% of the company’s worldwide annual turnover.

Could other countries ban ChatGPT?

In becoming the first European country to bar access to ChatGPT, Italy has set a precedent that other countries could follow.

While the ban is in place, Italy joins the likes of Russia, Iran, and North Korea, all of which block ChatGPT as part of wider internet censorship.

OpenAI has its services geoblocked in China, meaning that businesses and consumers are unable to access ChatGPT and DALL·E in the region.

Domestic companies such as Baidu have unveiled alternative chatbots, but none has yet demonstrated abilities on par with OpenAI’s GPT-4.

Reuters reported that the Irish Data Protection Commission (DPC) and French Commission nationale de l'informatique et des libertés (CNIL) are in discussions with their Italian counterparts to establish the basis for the decision.

If the Garante’s reasoning proves convincing, regulatory bodies across the EU could soon demand further privacy commitments from OpenAI.

“The Garante’s decision is a timely reminder that the excitement around generative AI must be tempered with caution,” Will Richmond-Coggan, a partner at national law firm Freeths specialising in privacy and technology, told IT Pro.


“OpenAI now has an opportunity to show how it has been gathering and using personal data to train its tool in a way that is compatible with GDPR. If it can’t, it seems likely that other European supervisory authorities may follow Italy’s lead.”

Richmond-Coggan noted that the focus on ChatGPT’s impact on children, and questions around age-appropriate content, will keep this regulatory interest alive, and that this is an issue with which AI developers will have to contend for some time.

“With the prospect of future legislation in the UK and Europe directed both to online harms and more focused on regulating AI technologies, this is likely to be only the first of a large number of regulatory hurdles,” he added.

The UK government’s recently released AI white paper identifies fairness as an attribute necessary for AI innovation, specifying that this includes the compliance of AI systems with laws such as the UK GDPR.

The government has also sought to outline the transparency and redress requirements by which firms operating in the space will have to abide.

Currently, AI models operate largely as ‘black boxes’, with users and even business partners given little to no oversight of the data used to train models, or of what data is processed to improve them.

But the government’s approach has also been explicitly pro-innovation, and was praised by industry leaders such as Microsoft UK CEO Clare Barclay as a “commitment to being at the forefront of progress”.

In the near future, AI companies could instead be compelled to shed more light on their data scraping, processing, and training activities.

A recent panel of experts noted that greater AI transparency is necessary to avoid future regulatory penalties, a debate that will only intensify in the coming years.

Rory Bathgate
Features and Multimedia Editor
