Why AI could be a legal nightmare for years to come

While the number of artificial intelligence (AI) tools on the market increases every day and more businesses are eager to integrate generative AI into their workflows, regulation is still immature.

Many countries are investing heavily in these systems, with locked-in development cycles and an entrenched obsession with developing ‘world-beating’ AI to keep up with competing nations.

But the spring period for AI may soon come to an end, under the harsh sun of pending regulation. In some regions, AI developers have less than a year to make sure their house is in order, or they could face the full force of the law.

Developers across the EU are staring down eye-watering fines if they implement AI systems in breach of privacy rights, while the US and UK have set out long-term goals of eliminating bias without impinging on innovation.

Legislation is bubbling across the world

Major legislation has been in the works for many years, but the speed at which it is being realized varies by region.

How the US is regulating AI

In the US, AI has been a point of focus for the White House since at least the Obama era. The Biden administration proposed its AI Bill of Rights in October 2022, with the aim of guiding policy to protect US citizens from unsafe AI systems and biased algorithms.

In May 2023, the White House also released its National AI R&D Strategic Plan to guide safe and ethical investment in AI. In August 2023, it launched the AI Cyber Challenge, a $20 million competition run in collaboration with private sector AI firms such as Google, OpenAI, and Microsoft to fund developer bids for systems that can protect US critical national infrastructure (CNI).

Public-private collaboration, rather than a carrot and stick approach, is at the heart of the US strategy. The White House has received voluntary commitments to uphold transparency and safety in AI development from 15 prominent software firms including OpenAI, Google, Microsoft, Amazon, IBM, and Nvidia.

In October 2023, the Biden administration announced its long-awaited executive order on AI, consisting of a set of requirements that organizations working with AI systems, be they generative or not, will have to adhere to. Under the order, AI developers will be required to share the results from testing their systems with the government before they are released.

The way organizations test their systems is also dictated by a set of strict guidelines set down by the National Institute of Standards and Technology (NIST). Companies developing AI models that could threaten national security, the economy, or the health and safety of US citizens will also need to share those models with the government.

The strength of this approach is yet to be seen, however, with a series of regional hurdles standing in the way of a truly unified national framework governing the development and deployment of AI tools. State governments in California and Illinois, for example, had already enacted AI regulations by the time Biden’s executive order was announced, and experts expect this piecemeal style of governance to continue.

How the EU is regulating AI

The EU’s AI Act has passed all the major hurdles to being signed into law. The bill seeks to regulate the development and implementation of AI systems according to the risks each poses. Developers working on ‘high-risk’ AI systems – including those that can infringe on the fundamental rights of citizens – will be subject to transparency requirements. These include, for example, disclosing training data.

The law also sets out criteria to define what constitutes an ‘unacceptable risk’; early drafts suggested this includes systems capable of subliminally influencing users, while real-time biometric identification was added at the eleventh hour.

The EU has also set out penalties for non-compliance with its requirements for developer transparency and adherence to the risk profiles it defines for AI tech: companies could be fined €20 million ($21.4 million) or 4% of their annual worldwide turnover – penalties akin to those under GDPR.

How the UK is regulating AI

Through its AI Whitepaper, the UK wants to strike a balance between risk aversion and support for innovation. Some industry insiders hail the document, which is under consultation, as the right move, but the UK still lags behind the EU when it comes to AI law.

The UK government has rejected the classification of AI technologies by risk, with the hope that a contextual approach based on ‘principles’ such as safety, transparency, and contestability will encourage innovation.

While some welcome it, the UK government has left the door open to changes down the line that might muddy the picture. It currently supports non-statutory AI regulation, in which existing regulators apply AI principles where possible within their remit, but it has said this approach could be ripped up and replaced with a statutory framework depending on the results of the experiment, with harsher regulations and new regulatory powers on the cards.

Many developers appear happy with the UK government’s approach to date, but others could be put off by its mercurial nature and its inability to give guarantees.

What about the rest of the world?

In Australia, the Albanese government ran a consultation process on AI regulation from June 2023 until August, with submissions to be used to inform the government’s future AI policies.

“Italy has already banned the use of ChatGPT and Ireland this week blocked the launch of Google Bard,” says Michael Queenan, CEO and co-founder of Nephos Technologies. 

“France and Germany have expressed interest in following in their footsteps. So, will they follow the AI Act or continue acting independently? The trouble is that you can’t legislate globally. Regulating AI use in the EU won’t stop it from being developed elsewhere; it will just take the innovation out of the region.”

How will EU regulations affect US companies?

The EU’s top-down approach to regulating AI could result in companies based in regions like the US, which still lacks a unified regulatory framework, gaining something of a competitive advantage over their European counterparts as they have fewer restrictions on what they can develop.

In the same vein, some have criticized the hardline stance the EU is taking with its AI Act, arguing that it could end up driving businesses out of the region. For example, OpenAI CEO Sam Altman hinted the company could leave the EU altogether if the regulations become too stifling for innovation. Notably, Altman walked back this tough talk just days later, reassuring the public that OpenAI had “no plans to leave”.

“It remains to be seen if these new proposed solutions will offer a meaningful distinction,” Mona Schroedel, a data protection specialist at national law firm Freeths, tells ITPro. “If a ‘homegrown’ data center is run, staffed, and supported within the EU rather than outsourcing certain elements of the services, then such an offering would provide a real alternative for companies wishing to avoid falling foul of the rules governing international transfers.

“We have seen a number of data controllers seek exactly such a streamlined environment to avoid the complications of having to map the data flow within systems which may ultimately be accessible from outside the EU. For those in Europe, this is likely to be the best way to ensure that appropriate safeguards are in place at all times for compliance purposes.”

Firms in the US could benefit from the EU-US Data Privacy Framework, which is set to ease the transfer of data across the Atlantic. But experts doubt the framework will stand the test of time, with Gartner’s Nader Henein telling ITPro it’s likely to be overturned within five years. In this way, EU law on AI could prove a headache for US firms, especially if US regulation proves significantly different from the risk-based EU approach.

At the moment, it is virtually impossible for copyright holders to get any clarity on exactly where and how their intellectual property was used to train models. This opacity is cause for concern, sparking various conversations about how AI could kill art as we know it, and the release of text-to-image models like DALL-E 2 and Stable Diffusion has only exacerbated these worries.

The primary criticism coming from artists is that AI-generated art is only possible because it has been trained on gargantuan amounts of art made by human artists. This has led many to argue that these systems effectively steal other artists’ work, and that their output should be treated as plagiarism.

The legal implications of AI-generated content

‘Heart on My Sleeve’ by ghostwriter977 is an AI-generated song purporting to be a collaboration between Drake and The Weeknd, using music from both artists to generate a virtual model of their voices.

Its creator could face legal action from Universal Music Group (UMG), which says the song breaches copyright. But it could prove a thorny case, as the question of whether using IP to generate a new product constitutes fair use remains unresolved.

Businesses operating outside the media sector may not have noticed this shift, as they are not directly affected. Very few non-media businesses produce internal artwork that could be used to train models, and using stock images instead may prove the easier way to avoid sourcing concerns.

Regardless, the situation will inevitably come to a head, and the fallout from this legal time bomb will likely become harder to remediate the longer it’s left unaddressed.

A number of nations and regulatory bodies are mulling these worries at the moment, and these concerns are only going to be amplified as AI technologies become more capable.

AI developers in the EU will be required to disclose where they sourced the training data for any LLM, according to the AI Act, and this could give rights holders enough clarity to prevent a flood of lawsuits clogging up the region’s courts.

“What we need to see happen is that technology really needs to be accessible to individuals in a way that is open, transparent, can be audited and understood,” Liv Erickson, ecosystem development lead at Mozilla, tells ITPro.

“We really need to understand how to tackle these challenges emerging around bias, misinformation, and this black box of not necessarily understanding what these systems are being trained on.”

It’s also near-certain that new technologies like real-time deepfakes will be weaponized in the coming years. Models like Microsoft’s VALL-E can already synthesize a person’s voice from only three seconds of input, and future systems could allow people’s voices or likenesses to be replicated to promote products, damage reputations, or carry out spear phishing.

We are currently witnessing an AI gold rush, with the rate of development forcing governments to commit to regulating the technology. Although the specific nature of the next step change in AI technology is unclear, we can be fairly confident that legal systems will need to make some serious changes to accommodate it.

The EU has a reputation as something of a trailblazer when it comes to regulation in the 21st century, having set precedents with GDPR for how companies around the world protect data. That reputation is going nowhere fast, as the region leads the pack in working through its regulatory framework for AI.

Nevertheless, there is certainly scope for the US to show off its legislative bona fides as the home of many of the world’s leading AI companies. President Biden’s Inflation Reduction Act was a demonstration of power when it comes to stimulating the market, leaving the EU rushing to respond. If the US can replicate this approach with AI, fostering an ecosystem that fuels innovation while enforcing demands on workers’ rights and reporting, the strength of its regulation could be the decisive factor for the future of the AI market around the world.

Working alongside the UK on projects including the Atlantic Declaration should only bolster both countries’ reputations as leaders in artificial intelligence, but this is yet to be demonstrated in practice. The UK’s legislative attitude towards AI is unproven, and the impacts of its planned international AI hubs are yet to be seen.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.