GDPR 2.0: What do Europe’s new AI rules mean for businesses?

Margrethe Vestager and Thierry Breton speaking at a press conference on AI in 2021

This article originally appeared in the May edition of IT Pro 20/20.

In April, the European Commission (EC) unveiled plans to regulate artificial intelligence (AI). The first-of-its-kind rulebook includes bans on practices that “manipulate persons through subliminal techniques beyond their consciousness”, as well as the use of AI for mass surveillance by law enforcement and government-conducted social scoring, such as that currently employed in China.

While the move has been roundly applauded by privacy advocates and widely regarded as a welcome step in the right direction, the new regulations could also bring sweeping changes for businesses, seriously disrupting how they operate. Given that almost 40% of businesses used some kind of AI or machine learning technology in 2019 – a figure that has likely grown as a result of pandemic-fuelled digital transformation – huge swathes of organisations could be forced to conduct risk assessments or continuously review their AI systems.

Failing to comply will carry consequences similar to those under the General Data Protection Regulation (GDPR), which became the de facto privacy standard for many of the world’s largest companies after it came into effect in May 2018. Those that break the AI rules face fines of up to 6% of their global turnover or €30 million, whichever is higher.

Risky business

At the macro level, the EU’s new rules take aim at “high-risk” AI systems, such as facial recognition, self-driving cars, and AI systems used in the financial industry. In these areas, those deploying AI systems will need to undertake a risk assessment and take steps to mitigate any dangers; use high-quality data sets to train the system; log activity so that AI decisions can be recorded and traced; keep detailed documentation on the system and its purpose to prove compliance with the law to government regulators; provide clear and adequate information to the user; put in place “appropriate human oversight measures”; and ensure a “high level of robustness, security and accuracy.”

While this list of steps has been applauded by those with a keen eye on privacy, it’s unlikely to be welcomed so fondly by those who have to put these measures in place. Ilia Kolochenko, CEO of ImmuniWeb, a global application security company that develops AI and machine learning technologies for its SaaS-based security solutions, believes the stringent requirements will be “arduous to implement in practice.”

“For instance, assessment of high-risk AI systems will be a laborious and costly task that may also jeopardise many trade secrets of European companies,” he tells IT Pro. “Moreover, most of the AI systems are non-static and are continuously improved, thus new regulation will unlikely provide even a 90% guarantee that the system will remain adequate after the audit.

“Furthermore, the requisite explainability and traceability of AI output is oftentimes technically impossible. Finally, isolated AI regulation leaves the door widely open for traditional software offering the same capacities in high-risk areas of operations. In a nutshell, this timely idea certainly deserves further discussion and elaboration, however, practicality will be the key to its eventual success or failure.”

Guillaume Couneson, partner at law firm Linklaters in Brussels, thinks there could be other consequences for organisations operating in high-risk areas, and for artificial intelligence technology itself.

“If inappropriately calibrated, this approach could stifle innovation and create barriers to the adoption of AI in the European Union,” he says.

“It will be important to keep flexibility on the categorisation, not only to add new high-risk uses in the future, but also to take out certain uses that would no longer be considered high-risk, for example because the use of an AI system in practice has evidenced that certain anticipated risks have not materialised or because individuals have become more accustomed to a particular use of AI systems.”

Black-box backlash

While the EU’s regulations take aim predominantly at those involved in the creation of high-risk AI systems, it’s unlikely that only these companies will be affected. With the use of AI and ML technologies becoming ever more prevalent, it’s hard to find a business that doesn’t rely on an algorithm for important decisions: whether we qualify for a mortgage, how much we pay for our flights, what kind of advertisements we’re shown and even the quality of customer support we receive.

Many of these organisations will be using a black box solution – a system that can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings – which often means organisations are unable to connect the dots of how these machines reach certain decisions.

Andre Franca, director of applied data science at deep-tech company causaLens, warns: “If organisations are not able to understand how the machine reaches a decision, this can lead to disastrous and unfair outcomes. The regulation is pushing organisations to become more accountable for the AI machines they are deploying, which is pivotal as AI becomes more prevalent across industries.

"With fines of up to 4% of a firm’s global annual turnover, it's imperative that organisations are able to understand how their AI models work to prove compliance to regulators and the board. Despite criticisms of the proposal being vague and ambiguous, set recommendations and criteria offered by the European AI Board will mean businesses need to look at their existing AI models and what needs to be changed in order to comply.

“For businesses using causal AI – which by its nature is a glass box solution, allowing businesses to look under the bonnet and have full control, visibility and transparency of their model – they can breathe a sigh of relief that they have the necessary information, if the board or a regulator comes knocking.”

Watchful AI

Naturally, some industries will undoubtedly be under more scrutiny than others – and not just those developing the AI systems in question. This will be largely determined by the amount of personal and sensitive data they process, which means sectors such as healthcare, transport and essential public services could be among the hardest hit.

Franki Hackett, head of audit and ethics at AI and data specialists Engine B, tells IT Pro: “The level of personal and sensitive data used is often going to be the thing that determines the impact of these proposals, and a lot has already been said about how companies offering financial services, or healthcare, might be affected.

“Companies whose entire business model relies, for example, on scraping and re-using photographs from social media without explicit permission to develop face recognition algorithms will have to make big changes,” she adds. “Others who use personal data to offer, for example, financial services might need more robust governance structures, and still others will find they’re not really affected at all.”

Similarly, Camilla Winlo, director of consultancy at DQM GRC, believes a huge number of sectors will be impacted by the AI regulations, from educational establishments and the emergency services to hardware manufacturers and border control. She advises that, while the rules are unlikely to come into force for some time yet, businesses need to start taking the appropriate steps now.

“I would recommend that organisations likely to be affected by the regulation review their data protection impact assessments (DPIAs), supporting documentation and controls in light of the requirements, and consider whether there are any gaps that will need to be filled,” she tells IT Pro. “This should be welcomed by the organisations in question, as it will improve their products. The rules are all about ensuring that products are attractive to potential customers, socially useful and minimise the risk of unwanted outcomes – all things any organisation should want, with or without a regulation.”

GDPR 2.0?

It remains to be seen whether organisations will welcome these changes, as many will likely fear the rulebook arrives as GDPR 2.0 – forcing them to deal with unfamiliar rules and regulations, with the threat of a hefty fine if they fail to comply.

However, the regulations' arrival after GDPR could make the new rules easier to deal with, particularly for those businesses with stringent data privacy protections in place.

Emma Erskine-Fox, associate at UK law firm TLT, says: “Businesses that already have robust governance frameworks in place for GDPR compliance purposes will likely be at an advantage compared to those that are ‘starting from scratch’, but nevertheless, the new requirements will add another layer of complexity for all organisations dealing with AI technology.”

Similarly, others don’t think the AI rulebook will be as big a disruption to businesses as GDPR.

Henrik Nordmark, director of science, data and innovation at data science company Profusion, says: “These changes aren’t too much to ask. One of the reasons GDPR is a big ask is because every organisation, small or large, deals with data and they all need to do so safely.

“However, at the risk of oversimplifying things, these new rules on AI mainly impact those creating new AI systems. Most companies will need to do their due diligence in terms of selecting AI products and services from reputable companies, but will not necessarily be engaged in creating their own AI. And if they happen to be a company that is AI-centric, then they should definitely be thinking about the risks on society of what they create and how to mitigate those risks.

“This is analogous to safety laws that we put in place for pharmaceutical companies so that their inventions are also safe.”

Carly Page

Carly Page is a freelance technology journalist, editor and copywriter specialising in cyber security, B2B, and consumer technology. She has more than a decade of experience in the industry and has written for a range of publications including Forbes, IT Pro, the Metro, TechRadar, TechCrunch, TES, and WIRED, as well as offering copywriting and consultancy services. 

Prior to entering the weird and wonderful world of freelance journalism, Carly served as editor of tech tabloid The INQUIRER from 2012 to 2019. She is also a graduate of the University of Lincoln, where she earned a degree in journalism.

You can check out Carly's ramblings (and her dog) on Twitter, or email her at hello@carlypagewrites.co.uk.