Can the AI Bill of Rights shape global AI regulation?


With innovation reshaping daily life and business immeasurably in recent years, there are growing calls for regulation to manage the pace of change. Artificial intelligence (AI), in particular, is an area that could benefit from structures to guide its development, yet governments around the world have been slow to act while the technology's capabilities advance exponentially.

In October, the White House’s Office of Science and Technology Policy announced a "blueprint" for an AI Bill of Rights. As the EU eyes its own AI legislation, and China moves towards further AI regulation, the American government has conceptualised a framework for AI practitioners to work towards. In its own words, it's a white paper "intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems".

The AI Bill of Rights has no teeth – yet

Not all experts with skin in the game are convinced this is an ideal solution. The document lays out a wide range of principles – from data privacy and discrimination prevention to so-called "human alternatives" and a directive on "notice and explanation". But industry leaders like Aible founder and CEO, and former Harvard senior fellow, Arijit Sengupta, are concerned. He says some interested parties – particularly in the media – have jumped too quickly to talk about enforcement before considering the real-world impacts of any framework translated into formal regulation.

“The only good thing about the Bill of Rights is that it has no teeth," he says. "If it had teeth, it would be even worse.

"Think of it this way, when new technologies come about, historically, what the US has always done is try to create a playing field where innovation could thrive. Our starting point was how do we make sure we are the leader in this technology? Interestingly enough, when you're looking at things like the AI Bill of Rights, that concept has gone out of the picture.”

Sengupta thinks the AI Bill of Rights, while possibly stifling innovation, is operating "from a place of fear". While he welcomes the document's intentions – lest the space devolve into a dangerously discriminatory place to operate – he argues the white paper's authors need to take a hard look at the vagueness of definitions for terms like "bias".




This level of scepticism isn't universally shared, though. Many in the space, like industry expert Sam Zegas, see the blueprint as a good first step and disagree with Sengupta's pessimism. Zegas is VP of operations at language AI firm, Deepgram, and was formerly a management consultant with the US Foreign Service.

“I agree with it being a set of guidelines," he explains, adding it would, of course, need more thought before it shifts towards any kind of formalisation, implementation and enforcement. "This isn't actually legislation, it's a position that the government is putting out there hoping people will engage with it, but it's not enforced in any way. I think that's the right approach because it's a very fast-moving space. I think there are a lot of different applications that may require different interpretations of these guidelines.”

Dr Chris Hazard, co-founder and CTO at Diveplane, who previously worked at the US Department of Defense, agrees. For him, the framework can be made more valuable by thinking through second- and third-order effects – such as what incentives could spring up if the document becomes more actionable, and how the US can balance regulation with the business incentives driving AI's commercial development.

“It's just a recommendation – a draft. I think that's exactly right, because it's so easy to get the legislation wrong," he warns. "If you get it wrong, you can stifle innovation, you can have people building the wrong things, and everyone spending a lot of money building things that don't actually help, but [do] meet the law.”

Other positives Zegas presents include the document's link with the ways in which the country has, in his view, previously looked to increase equity. “Rights-based frameworks have been incredibly influential in the last century in advancing the quality of the human experience in the United States and so I very much am in support of bringing a rights-based framework into the AI space.”

Can US AI regulation be a role model for the world?

Of course, none of this happens in a vacuum. Hazard says when the US thinks about the implementation of the AI Bill of Rights and how AI ethics can be further advanced, it needs to be thinking about how to lead globally.

“I think because of the market power of the United States, and the social and economic influences that [come with that], showing the United States is being thoughtful about this and is moving in the direction that many other companies or countries that lead in AI … shows that this is a global thing.”

Sengupta, on the other hand, isn’t convinced the US’ influence can be taken for granted. For him, the US has to be mindful if it wants to make its voice heard rather than relying on its historical dominance. “The point in time when the world naturally did what America did has passed," he says. "I think what's important for America to understand is that you can only lead by having a better example. You can’t just lead by example, right? And the issue right now is the approach they're taking is not a better example.”

Sengupta characterises the EU’s approach as “micromanaging industry” and China’s as focused on “societal cohesion". His worry about the Bill of Rights is that it will create so much bureaucracy that it overwhelms smaller companies, to the point where they question whether it is viable even to attempt to compete in the industry.

On a global scale, Zegas says another risk is that an opt-out procedure related to bias, if taken up too widely, could artificially skew the results produced by any AI implementation. For his company, which works in language, the variability of the data set means, in his view, that completely eliminating bias is impossible.




“The guidelines promote the idea that people should be able to opt out of having their data included in algorithms, and I think there's an interesting trade-off there, which is that, for example, if a certain minority group were to opt out of including their data in a Deepgram algorithm, that would actually bias our systems away from being able to serve them, and their choice to do that would actually negatively impact the equitability of the product that we could put out.”

Meanwhile, Zegas says consumer choice is good and that the document positions the US in a “Goldilocks zone” between the highly regulated EU and the less controlled – but less IP-conscious – China. He urges legislators to strike a balance that gives people enough freedom to innovate while freeing them from bureaucracy.

In the midst of the chaos, some are coming together to talk about equitability in AI. Hazard, whose company is part of the Data and Trust Alliance – which includes companies like IBM, Meta and UPS – says part of their work includes discussions around hiring practices, mergers and acquisitions, and promoting good data.

This work ensures companies are committed to not including unethically sourced data in machine learning and AI algorithms, while regulators across the world endure the long, hard slog of codifying such principles into law.

John Loeppky is a British-Canadian disabled freelance writer based in Regina, Saskatchewan. His work has appeared for the CBC, FiveThirtyEight, Defector, and a multitude of others. John most often writes about disability, sport, media, technology, and art. His goal in life is to have an entertaining obituary to read.