US starts exploring “accountability measures” to keep AI companies in check


Lawmakers in the US are set to explore potential “accountability measures” for companies developing artificial intelligence (AI) systems such as ChatGPT amid concerns over economic and societal impacts. 

The National Telecommunications and Information Administration (NTIA), the US agency that advises the government on technology policy, said it will launch a public consultation on AI products and services.

According to the NTIA, insights gathered from this consultation will help the Biden administration develop a “cohesive and comprehensive federal government approach to AI-related risks and opportunities”.

“NTIA’s ‘AI Accountability Policy Request for Comment’ seeks feedback on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems that they work as claimed,” the department said in a statement on Tuesday. 

In its statement, the NTIA said that potential audits of AI systems could work in a similar fashion to those conducted in the financial services industry to “provide assurance that an AI system is trustworthy”.

NTIA administrator Alan Davidson said the consultation will help inform the US administration’s long-term approach to AI products and prevent or mitigate any adverse effects. 

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” he said. 

“Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

Concerns over AI's growth

The move from the NTIA follows mounting concerns about the potential impact of generative AI systems such as ChatGPT.

The rapid advent of generative AI products has prompted a degree of hesitancy among lawmakers on both sides of the Atlantic. 

In late March, Italy announced a shock ‘ban’ on ChatGPT amid data privacy concerns. 

The Italian data protection authority voiced serious concerns about the generative AI model and said it plans to investigate OpenAI “with immediate effect”. 


Lawmakers elsewhere in Europe are also thought to be exploring a potential crackdown on AI systems, with German authorities among those cited as having serious concerns. 

While lingering worries over generative AI products such as ChatGPT continue, some industry analysts described the Italian decision as an “overreaction”, saying that such crackdowns could have negative long-term implications for companies in the country exploring the use of AI. 

Andy Patel, researcher at WithSecure, told ITPro that Italy’s decision had essentially “cut off” one of the most transformative tools currently available to businesses and individuals. 

Industry stakeholders have also voiced a growing discontent over the speed of generative AI development. 

Around the time of Italy’s ChatGPT decision, an open letter penned by tech industry figures including Elon Musk called for an immediate halt to “out of control” AI development. 

The controversial letter demanded a six-month pause be imposed on companies building generative AI models and argued that there is a concerning lack of corporate and regulatory safeguards currently in place to moderate generative AI development. 

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.