White House targets close relationship with AI CEOs on safety

The White House has set its sights on a long-running collaboration involving AI leaders in the private sector to ensure the rapidly developing technology remains safe and ethical.

Big Tech CEOs met with President Joe Biden and Vice President Kamala Harris on Thursday, concluding the meeting with a commitment to responsible AI development.

Vice President Harris welcomed Sam Altman, CEO at OpenAI, Dario Amodei, CEO at Anthropic, Satya Nadella, chairman and CEO at Microsoft, and Sundar Pichai, CEO at Google and Alphabet, to discuss current market approaches to AI.

Both Harris and Biden sought to impress upon the CEOs the importance of risk-prevention strategies in AI development, and secured a commitment from them to continue engaging with the administration on the issue.

Alongside the announcement of the meeting and its outcome, the National Science Foundation has officially allocated $140 million in funding to seven new national AI research institutes, which will work with federal agencies, industry bodies, and higher education institutions to develop ethical and trustworthy AI solutions.

“President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy,” read the White House statement.

“Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”

The Biden administration has pointed to AI R&D as critical to approaching problems such as cyber security, energy, climate change, and education.

In recent months, generative AI has seen rapid growth through chatbots such as ChatGPT and Bard but has also prompted concerns.

Academic institutions have struggled with a lack of tools to detect AI-generated text, and notable tech pioneers called for a six-month pause on AI development in March over safety concerns.

US lawmakers have begun to explore “accountability measures” for AI, with the National Telecommunications and Information Administration (NTIA) set to launch a public consultation on AI services to inform White House policy.

The Office of Management and Budget has also announced that it will publish draft policy on the use of AI systems by federal departments and agencies.

It is hoped that public comment on this draft, combined with an eventual model for federal implementation, could set a precedent for responsible AI use that can be replicated throughout the industry and in state and local governments.

Experts have advised AI firms to adopt greater transparency to avoid future regulatory penalties, particularly given that the EU’s AI Act has sought to place more responsibility on developers for misuse of their AI models.

The bloc’s latest draft of its long-awaited AI legislation also aims to shield businesses from IP theft, and to set out clear guidelines for AI systems that pose a risk to human rights, such as those used for live facial recognition.

AI firms are under increasing pressure to comply with regulatory requirements in the EU. Italy’s recent ban on ChatGPT over GDPR concerns stands out as an early indication of the penalties that could be leveled against tech companies in the near future.

OpenAI has been told it must implement ‘right to be forgotten’ controls for ChatGPT before the service can return to Italy, and EU data protection regulators could apply the same standard to other AI chatbots.

Rory Bathgate

Rory Bathgate is a staff writer at ITPro covering the latest news on UK networking and data protection, privacy and compliance. He can sometimes be found on the ITPro Podcast, swapping a keyboard for a microphone to discuss the latest in tech trends.

In his free time, Rory enjoys photography, video editing and graphic design alongside good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory took an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, after four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.