"Responsibility and innovation are not opposites" – AWS thinks global alignment on AI regulation is possible but must be risk-based
Serious discussions over global AI alignment will be needed in coming years to ensure no regions or nations are left behind
With the advent of early automobiles in the 19th century, the UK introduced legislation to protect existing transport industries such as rail networks and stagecoaches.
One of the most notorious of these laws, the ‘Red Flag Act’, restricted the speed of ‘horse-less’ vehicles to just 2mph in built-up areas and 4mph in the countryside. Worse still, the legislation required three operators per vehicle: two in the vehicle itself and a third walking ahead carrying a red flag to warn oncoming traffic that an automobile was approaching.
The result of this, according to Sasha Rubel, head of public AI policy at AWS, was a hammer blow to an industry during its critical embryonic stages.
Speaking to ITPro at AWS re:Invent 2025, held in Las Vegas, Rubel says this is a perfect analogy to describe the current challenge industry and governments alike face with generative AI.
Driving growth of the industry and delivering on the potential of the technology will require a delicate balancing act in the coming years. Yet conflicting regulatory positions on both sides of the Atlantic pose a serious risk to progress and often leave businesses confused.
“For me, this is a really important lesson from history, because it shows that we not only need to have a shared understanding of what risk and misuse of the technology is, but if you overregulate a technology mitigating for every single possible misuse, you actually miss out on the benefits that this technology represents,” she tells ITPro.
“We need to focus not only on the risks of misuse of the technology, we also need to focus on what it means if we miss out – the missed use of the technology – if we miss out on what this opportunity represents, not only in terms of the economic benefits to GDP and European competitiveness, but more fundamentally to the beneficial use of what this technology represents for everyday life of people in Europe,” Rubel adds.
Rubel says discussions on AI regulation, particularly in Europe in recent months, show there’s now a “growing consciousness” that a more aligned international approach to the technology will be needed.
Robust regulatory frameworks may have noble intentions, but there’s a risk that without clear communication some regions globally may be left behind in the ongoing AI race.
“We see that in the policy conversations that are happening that we need to simplify rules, and we need to align internationally on what those rules look like in order to make sure that Europe and the United Kingdom remain competitive in this space.”
Misalignment is costing businesses big
The impact of this misaligned approach to AI regulation is already being felt by providers and enterprises alike, Rubel says. In a study conducted by Strand Partners on behalf of AWS, more than two-thirds (68%) of organizations in the EU don’t understand their obligations under the EU AI Act.
Further, the study found companies that aren’t sure about compliance typically invest up to 30% less in technology year on year. To add insult to injury, the sheer complexity of compliance means the function ends up accounting for around 40% of overall IT spend at some enterprises.
“They’re afraid that they don’t understand the rules and that they’ll be fined because of the complexity of rules,” she says. “I hear every day from customers saying, ‘can you explain to us the interplay between the EU AI Act and the GDPR and the EU Copyright Directive’.”
Ultimately, reducing complexity in this regard will have a positive downstream effect on compliance costs, Rubel says. When businesses know they’re compliant and operating within established rules, this is conducive to innovation.
“Reducing those compliance costs by mainstreaming rules is really essential,” she says. “It allows startups to be competitive at the global level, but it also allows for an approach that’s globally aligned.”
Responsibility builds trust; trust drives innovation
“Responsibility and innovation need to go hand in hand,” Rubel tells ITPro. It’s a point AWS has been keen to emphasize in recent years, and a longstanding mantra at the company.
Development policies that put responsibility at their heart ultimately reduce risk, Rubel says. First and foremost, that means acknowledging the risks associated with the technology.
“Responsibility and innovation are not opposites,” she adds. “Responsibility drives trust, which is one of the biggest blockers to AI adoption beyond regulatory uncertainty. That trust drives adoption, and that adoption drives innovation.”
Secondly, bringing together relevant stakeholders from various domains to tackle these risks collectively will be equally crucial. Rubel believes doing so will be the first key step toward fostering broader global alignment.
Going forward, she calls for a risk-based approach developed by industry, academia, government, and civil society to ensure alignment and enable innovation.
Is regulatory alignment a pipe dream?
Achieving global alignment is easier said than done, however. There are economic, social, and geopolitical considerations on both sides of the Atlantic.
The lack of alignment between the United Kingdom, United States, and European Union highlights this, with the EU pursuing a far stricter approach than the laissez-faire style favored across the pond.
Discussions about AI legislation in the US have proved controversial in recent months, with a rift emerging over federal and state-based approaches to regulating the technology.
Business leaders themselves also appear conscious of geopolitical factors at present, particularly with regard to issues like data sovereignty and reliance on foreign infrastructure providers.
In a survey conducted by Civo, UK-based IT leaders voiced serious concerns about the influence of US cloud providers in the wake of tariffs imposed by the Trump administration. More than half (60%) of respondents said the UK government should go so far as to cut its use of US cloud services.
While that research came specifically in response to economic strategy in the US, it does point toward a growing appetite for isolationist-style, sovereign AI approaches. Political unions like the EU want their data kept in-region, governments want their data kept in-country, and so do the enterprises operating in those individual regions and nations.
If even an aligned approach to data storage is being called into question, it's all the more important that industry and public bodies come together to define clear, reproducible standards that lock in both safety and innovation.
