The White House wants US government agencies to hire a chief artificial intelligence officer — here's why

The White House, home of the President of the United States, pictured in July 2023 (Image credit: Getty Images)

US federal agencies will have to appoint a chief artificial intelligence officer (CAIO) and publish a list of their AI use cases under new guidelines issued by the White House Office of Management and Budget (OMB).

The new policies aim to encourage federal agencies to harness AI effectively while mitigating its risks through stronger governance.

The new rules set a deadline of December 1, 2024 for federal agencies to establish “concrete safeguards” when using AI in ways that could impact rights or safety.

These safeguards include ways to test and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparent information on how the government is using AI.

If an agency cannot apply these safeguards, the agency must cease using the AI system, unless this would create an “unacceptable impediment” to critical agency operations, the OMB said. Agencies should also consult unions in an attempt to mitigate AI’s potential impact on the workforce.

Under the guidance, agencies have 60 days to appoint a CAIO, who will coordinate the use of AI across the agency, and to establish governance boards to oversee that use.

The Departments of Defense, Veterans Affairs, Housing and Urban Development, and State have already established these governance bodies, while other agencies have until May 27, 2024 to do the same, the OMB said.

Agencies can designate an existing official, such as a CIO or CTO, to fill the role of CAIO. This individual will have primary responsibility for coordinating their agency’s use of AI, promoting AI innovation, and managing the risks the technology brings.

That involves creating the annual AI use case inventory, working with HR on AI skills, removing barriers to the responsible use of AI within the agency, and identifying and managing the associated risks.

These safeguards could mean, for example, that travelers will have the ability to opt out from the use of facial recognition at the airport without losing their place in line, the OMB said.

Similarly, when AI is used in the federal healthcare system as part of diagnostics decisions, a human will be required to oversee the process to verify results and ensure AI does not create disparities in healthcare access.

Federal agencies will have to be transparent about when they are using AI by releasing annual inventories of their AI use cases, identifying those that impact rights or safety and explaining how the agency is addressing the relevant risks.

They will also have to release government-owned AI code, models, and data, where such releases do not pose a risk.

The OMB is keen for agencies to use AI, noting that the technology presents “tremendous opportunities to help agencies address society’s most pressing challenges”.

It pointed to examples including the Federal Emergency Management Agency’s (FEMA) use of AI to review and assess structural damage in the aftermath of hurricanes, as well as the National Oceanic and Atmospheric Administration’s (NOAA) use of AI to conduct more accurate forecasting of extreme weather, flooding, and wildfires.

“Advances in generative AI are expanding these opportunities, and OMB’s guidance encourages agencies to responsibly experiment with generative AI, with adequate safeguards in place. Many agencies have already started this work, including through using AI chatbots to improve customer experiences and other AI pilots,” it said.

Government bodies across the US are trying to work out where, when, and how to use AI.

For example, New York state recently unveiled its own ‘Acceptable Use of Artificial Intelligence Technologies’ policy to cover the broader use of systems that use machine learning, large language models, natural language processing, and generative AI.

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.