CISA’s interim chief uploaded sensitive documents to a public version of ChatGPT – security experts explain why you should never do that

The incident at CISA raises yet more concerns about the rise of ‘shadow AI’ and data protection risks


Security experts have warned about the dangers of ‘shadow AI’ after the interim chief of the Cybersecurity and Infrastructure Security Agency (CISA) uploaded sensitive documents to a public version of ChatGPT.

According to reports from Politico, Madhu Gottumukkala uploaded documents to the popular AI tool despite it being blocked for other DHS employees at the time.

The incident prompted an “internal review” after triggering security features designed to prevent theft or mistaken disclosure of government materials, sources told the publication.

While the review found that none of the files uploaded to the chatbot were classified, they did include CISA contracting documents marked “for official use only”.

CISA’s director of public affairs, Marci McCarthy, told Politico that Gottumukkala was granted permission to use ChatGPT “with DHS controls in place,” adding that use of the chatbot was “short-term and limited”.

Alastair Paterson, CEO and co-founder at Harmonic Security, told ITPro that the incident is concerning and highlights the potential for AI-related blunders even in tightly controlled environments.

“It’s obviously embarrassing but not without precedent,” he said. “When DeepSeek launched it was reported that staff at the Pentagon used it for days before it was blocked. It shows that any organization is susceptible to the risk of shadow AI and the need for urgent controls.”

Shadow AI is becoming a serious problem

Although the blunder had no serious ramifications, security experts told ITPro that uploading sensitive information to unauthorized tools can pose significant risks to organizations.

This is an issue that has grown significantly over the last three years, with the increased use of AI in the enterprise giving rise to concerns about ‘shadow AI’.

This refers to the use of AI tools that haven’t been expressly authorized by security teams, meaning those teams can’t keep track of documents or potentially sensitive corporate information uploaded to the chatbots.

A recent survey from BlackFog, for example, found that nearly half (49%) of workers admitted to having used AI tools in the workplace without approval, often sharing sensitive data with free versions of popular solutions like ChatGPT.

Carl Wearn, head of threat intelligence analysis and future ops at Mimecast, told ITPro that shadow AI practices have serious long-term implications for data protection and cybersecurity.

“The risk itself is straightforward but serious. Once contracts, emails, or internal documents are entered into a public AI model, control is lost in seconds,” he said.

“There’s often no clear way to retrieve that data or understand where it may surface next.”

In most cases, Wearn noted, there’s “no malicious intent” involved; workers are simply cutting corners under pressure, at least in part due to demands placed on them “in the name of efficiency”.

“Shadow AI is accelerating because people are under pressure to move faster, not because they are careless,” he told ITPro.

Leadership needs to take responsibility

While these activities are prevalent among workers, research shows leadership figures are also engaging in, and even encouraging, dangerous shadow AI practices.

BlackFog’s survey found more than two-thirds (69%) of C-suite members said they’re happy to prioritize speed over privacy in many cases, effectively signaling a seal of approval for workers to engage in risky practices to speed up processes.

Wearn noted that management figures should lead by example on this front by engaging with staff to set clear guidelines.

“Leadership, accountability, and clarity now matter just as much as the technology itself,” he said. “Organizations that establish governance frameworks today will be better positioned to harness AI’s benefits while protecting what matters most.”

