White House targets close relationship with AI CEOs on safety
Biden admin has called for greater controls on AI, while stressing the tech’s importance
The White House has set its sights on a long-running collaboration with AI leaders in the private sector to ensure the rapidly developing technology remains safe and ethical.
Big Tech CEOs met with President Joe Biden and Vice President Kamala Harris on Thursday, in a meeting that concluded with the executives committing to responsible AI development.
Vice President Harris welcomed Sam Altman, CEO at OpenAI, Dario Amodei, CEO at Anthropic, Satya Nadella, chairman and CEO at Microsoft, and Sundar Pichai, CEO at Google and Alphabet, to discuss current market approaches to AI.
Both Harris and Biden sought to impress upon the CEOs the importance of risk-prevention strategies in AI development, and secured a commitment from them to continue engaging with the Biden administration on the issue.
Alongside the meeting, the National Science Foundation officially allocated $140 million in funding to seven new national AI research institutes, which will work with federal agencies, industry bodies, and higher education institutions to develop ethical and trustworthy AI solutions.
“President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy,” read the White House statement.
“Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”
The Biden administration has pointed to AI R&D as critical to approaching problems such as cyber security, energy, climate change, and education.
In recent months, generative AI has seen rapid growth through chatbots such as ChatGPT and Bard but has also prompted concerns.
Academic institutions have struggled with a lack of tools to detect AI-generated text, and notable tech pioneers called for a six-month pause on AI development in March over safety concerns.
US lawmakers have begun to explore “accountability measures” for AI, with the National Telecommunications and Information Administration (NTIA) set to launch a public consultation on AI services to inform White House policy.
The Office of Management and Budget has also announced that it will publish draft policy on the potential for AI systems to be used by federal departments and agencies.
It is hoped that public comment on this draft, combined with an eventual model for federal implementation, could set a precedent for responsible AI use that can be replicated throughout the industry and in state and local governments.
Experts have advised AI firms to adopt greater transparency to avoid future regulatory penalties, particularly given that the EU’s AI Act has sought to place more responsibility on developers for misuse of their AI models.
The bloc’s latest draft of its long-awaited AI legislation also aims to shield businesses from IP theft, as well as set out clear guidelines for the AI systems that pose a risk to human rights such as those used for live facial recognition.
AI firms are increasingly under pressure to conform to regulatory requirements in the EU, with the recent case of Italy banning ChatGPT over GDPR concerns standing out as an early indication of the penalties that could be leveled against tech companies in the near future.
OpenAI has been told it must implement ‘right to be forgotten’ controls for ChatGPT in order to be allowed back into Italy, and EU data protection regulators could follow this example for other AI chatbots.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.