Why Neuromancer's warnings could shape tomorrow's laws

Neuromancer is going to turn 40 in a few months. Getting old is often painful for science fiction novels, as the big guesses they make fail to come true, and the unconscious assumptions of one age become embarrassingly outdated in the next.

And yet, for me, William Gibson’s hard-bitten future-noir tale of hackers, AI, and double-cross is still as relevant as ever.

Certainly, back in the early 1980s Gibson missed some of the technology shifts ahead: when I interviewed him years later, he pointed out that the book is set in a future without cell phones. But he absolutely got it right that AI models – and the laws governing them – would be one of the biggest issues ahead for any society.

Neuromancer presents a vision of the future where general AI is very carefully regulated so that it can’t ever get too smart. There’s even an agency, the aptly-named Turing Registry (or the Turing Police), that stops any AI system from evolving further, using whatever means necessary.

As one of the characters explains, the minute that an AI starts figuring out ways to make itself smarter: “Turing’ll wipe it.” As the book says: “Every AI ever built has an electromagnetic shotgun wired to its forehead.”

So how does all this compare to our world? Firstly, despite the waves of hype, we are years (and most likely decades) away from having to deal with the arrival of any meaningful form of human-level general AI that could pose a risk – the sort envisioned by Neuromancer. It’s hard to say whether we’d even recognize it if it did arrive.

Policing such a system would also be significantly harder in reality – AI doesn't live in a single computer, so there's no one machine to wipe and no one forehead to threaten. Instead, today's large language models are distributed across the cloud, available as open source, and even running as lightweight versions on your phone or laptop. Good luck keeping tabs on all of those.

The good news is that we are starting to build legislation to regulate AI – even if we lack the Turing cops to enforce it. Right now, the EU is the first body to come up with something that it says will ensure that AI systems used across the region will be safe and respect the fundamental rights of humans.

The idea here is to regulate AI based on the potential for risk: the higher the risk, as determined by the EU, the stricter the rules. The law aims to ban the most controversial uses of AI, like cognitive behavioral manipulation, the scraping of facial recognition data from the internet or CCTV footage, emotion recognition in the workplace or in education, and social scoring.

There are also rules for general purpose AI models capable of tasks like generating video, text, and images, conversing in natural language, computing, and writing code. And there’s a stricter regime for ‘high impact’ foundation models, because they could create “systemic risks along the value chain”.

So far so good, and there are hopes that the EU law could be a template for further laws around the world.

However, until we see the detail of the legislation, it’s going to be hard to assess the real impact. The law isn’t going to come into force for a couple of years yet – decades in AI evolutionary terms. And, of course, it will only cover AI sold or used in the EU, while much of the momentum around AI is elsewhere, notably in the US and China. The law also includes exceptions for law enforcement and military uses of AI.

It's essential to make sure that algorithms and AI that might oversee life-and-death decisions – and the people building and running them – can be challenged and held accountable; otherwise we will simply reinforce biases that are already all too evident in society.

Conspiracy to augment an artificial intelligence

A real challenge over the next decade or two will be the gradual automation of large numbers of jobs – perhaps as many as a third, many of them currently filled by well-paid professionals.

That could create profound dislocation even if it creates new jobs. Those impacts will not be felt evenly, either within an industry or a country, making it all but impossible to regulate.

In many cases it will come down to the strategy of the companies, and the individual executives, deploying AI tools: whether they choose to use these new technologies to augment jobs, or to automate them.

The impact of choices made by middle managers who decide to use AI to cut costs could be just as brutal as any brilliant plot from a malevolent AI. One of the themes of Neuromancer – the idea that there are people who will willingly develop and enhance technologies even if that risks unforeseen damage to society – rings depressingly true.

Lawmakers are heading in the right direction in terms of banning the most obvious bad uses of AI. But we’re still unlikely to ever end up with a law that makes ‘conspiracy to augment an artificial intelligence’ a crime, as it is in Neuromancer.

This probably won’t matter – it will be the smaller decisions, not the bigger ones, that will really decide the true impact of AI over the next few years.

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.