Does the US AI Action Plan add up and how will it change the global AI landscape?
Businesses should expect to feel benefits in the short term, especially AI developers with potential to land government contracts – but experts warn of risks on the horizon


The US has injected fuel into the AI arms race with an Action Plan — a sweeping set of aggressively deregulatory and integrative measures that collectively aim to cement American dominance in the technology.
"Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race," wrote Michael J Kratsios, assistant to the President for science and technology; David O Sacks, special advisor for AI and crypto; and Marco Rubio, assistant to the President for national security affairs, in the policy.
The plan itself is focused on three core pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security. Beyond the familiar promise of cutting metaphorical "red tape", the measures also include plans to strip out perceived pro-liberal bias from AI systems and remove guardrails from AI development. Adnan Masood, chief AI architect at UST, tells ITPro this is a "win the race" blueprint that is significant for its "pivot to permissionless innovation".
There will undoubtedly be repercussions for businesses both domestically and abroad, as the plan pledges to change the global economic and trade outlook. That said, it is important to remember that this is merely a plan – not a self-effectuating document, according to Cobun Zweifel-Keegan, managing director, DC at the International Association of Privacy Professionals (IAPP).
"Some of the actions outlined in the plan have already seen progress through executive orders, but many remain just ideas on a page. Nevertheless, it is a very helpful document toward understanding the Administration’s full slate of expected actions on AI and it’s the first real preview we have had into the current thinking on AI policy within the White House."
But how exactly should businesses expect to adapt to the new framework?
What's the point of the AI Action Plan?
The AI Action Plan rolls back the capacity for various bodies in the US to regulate AI or add core safeguards while the technology is being developed. It contains more than 90 policy recommendations focusing on innovation, infrastructure, and protecting national security. If implemented, the plan has the potential to significantly change what AI developers should expect to do from a compliance standpoint. In other words, it's a "shoot first, ask questions later" approach to AI development. But the gamble may well pay off, according to experts.
"At its core, the plan treats AI not as just another tech sector, but as critical national infrastructure similar to energy or defense," Angeli Patel, executive director from UC Berkeley Law and Business and a practicing attorney, tells ITPro. "Its top priority is ensuring America, not authoritarian regimes, sets the global course for AI by doubling down on domestic strength and production while embedding democratic values of free speech into the systems shaping our future."
The Action Plan also elevates AI development by giving it the same importance as national infrastructure – positioning AI as 'protected' against foreign adversaries and domestic regulation. "That means more money, faster deployment, and a clear win for Big AI."
Masood agrees that the thrust of the plan is around deregulating early, building fast, and exporting American AI end-to-end, adding that it seems substantive enough to achieve these core aims both on paper and in its early implementation. "The plan is dense with agency tasks and the GSA [General Services Administration] vendor list is already live, signaling procurement intent. The crux is execution on permitting, grid, and fabs."
Big Tech companies are big winners
AI developers stand to gain vastly from the new measures, experts tell ITPro, with better access to compute, talent, and a mergers and acquisitions market that is heating up. Patel expects booms in privacy tech, cybersecurity and AI education: "For businesses generally, AI adoption will spread rapidly if it hasn't already – from backend ops to customer-facing products."
Masood believes the industry has plenty to look forward to, as an AI architect and practitioner himself. Promoting open-weight model development, where internal parameters used in the training of large language models (LLMs) are made public, is an exciting proposition in particular. This, combined with standardized evaluations and testbeds, as well as easier federal access and richer public scientific databases, will directly translate into faster time-to-pilot for new systems, more deployment options and clearer acceptance criteria in regulated industries.
The fact that the administration supports the rights of AI developers to use copyrighted materials in training AI models, under "fair use" conditions, is also a nod to developers, with some early district court decisions being favorable to AI developers, according to Arnold & Porter legal analysis. There are, however, hesitations over whether these decisions will hold, given that neither the Supreme Court nor Congress has definitively settled the fair use question.
"It’s been clear since February’s AI Summit in Paris – which also focused on action – that the Trump administration was going to enable AI in an unencumbered way. So the copyright decision is no surprise," says Amanda Brock, CEO of the open source trade organization OpenUK.
"The content creators shouldn’t be too upset," Brock tells ITPro. "They haven’t won licensing fees for AI training but the reality is that supporting them is a long-term challenge, created by the digital age. This is not purely a consequence of AI and would never really be resolved by copyright. More appropriate long-term solutions are needed."
Meanwhile, the investment picture is looking particularly rosy as a result of the new AI policy recommendations. Masood highlights that, for the likes of Oracle and Microsoft, which are big government players, there will be higher public sector workloads. Chip makers, meanwhile, will see strong demand and a sales boost in allied and US markets, in light of export control tightening. The emphasis on domestic fabrication, too, supports long-term supply security.
Patel says that Oracle, Microsoft, Nvidia and data center providers like Equinix "stand to gain from expanded government procurement and strong policy tailwinds". She adds: "At minimum, they’ll benefit from favorable regulatory treatment. At best, they'd be able to bid for direct federal investment. One of the most significant benefits for these AI infrastructure giants is the streamlining, or outright removal, of environmental permitting for AI-related infrastructure."
What risks does the new approach to AI open up?
The biggest concern the experts have with the Action Plan is the additional risk it could introduce. The Biden administration focused its AI policy on mitigating the risks of disinformation, among other areas. But the National Institute of Standards and Technology (NIST) will now remove any references to diversity, equity and inclusion (DEI) and climate change, as well as misinformation, from its AI risk management framework (RMF).
Patel notes that such disregard for issues like DEI risks narrowing the innovation pipeline in the long term. "The plan sidelines DEI-focused research, which is critical for addressing bias and underrepresentation in LLMs and threatens to withhold support from states or organizations that don’t align politically. That’s not just bad for equity; it’s bad for innovation. By rewarding compliance over creativity, the plan may accelerate dominance at the cost of resilience."
She adds that the biggest downside of the plan is its shortsightedness. "Investing in AI infrastructure while cutting corners on environmental protection and workforce development may help scale the technology over the next five to ten years, but at the cost of hollowing out our core," she explains.
"Job displacement is already underway, and misinformation is already costing billions. This plan accelerates both, with no meaningful guardrails in place. It’s also combative by design. By framing AI as an arms race with China, the plan treats AI as a geopolitical weapon rather than a global system that demands multilateral stewardship. It misses the opportunity to use interdependence as a tool for de-escalation and collective resilience."
Masood focuses on possible risks to enterprise safety, as well as the nightmare of dual compliance, where enterprises will have to invest resources in complying with both US policies and EU regulations. With fewer ex-ante rules – regulations devised to prevent disasters before they happen – failures such as critical bias, safety incidents and privacy breaches will increasingly be policed only after they occur, through existing laws and torts.
In the coming months and years, states will react to the recommendations within the plan and act accordingly. The Trump administration has threatened to divert AI-related federal funding from states with "burdensome AI regulations" and this could have an effect on businesses within those states.
Though California's AI bill was vetoed by state governor Gavin Newsom in 2024, Colorado's own AI bill targeting algorithmic discrimination comes into force from February 2026. Other states looking to introduce AI transparency legislation include New York – and IT leaders will need to closely follow how legislation develops in any state they operate within.

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.