CISA’s interim chief uploaded sensitive documents to a public version of ChatGPT – security experts explain why you should never do that
The incident at CISA raises yet more concerns about the rise of ‘shadow AI’ and data protection risks
Security experts have warned about the dangers of ‘shadow AI’ after the interim chief of the Cybersecurity and Infrastructure Security Agency (CISA) uploaded sensitive documents to a public version of ChatGPT.
According to reports from Politico, Madhu Gottumukkala uploaded documents to the popular AI tool despite it being blocked for other DHS employees at the time.
The incident prompted an “internal review” after triggering security features designed to prevent theft or mistaken disclosure of government materials, sources told the publication.
While this review found none of the files uploaded to the chatbot were classified, they did include CISA contracting documents marked “for official use only”.
CISA’s director of public affairs, Marci McCarthy, told Politico that Gottumukkala was granted permission to use ChatGPT “with DHS controls in place,” adding that use of the chatbot was “short-term and limited”.
Alastair Paterson, CEO and co-founder at Harmonic Security, told ITPro that the incident is concerning and highlights the potential for AI-related blunders even in tightly controlled environments.
“It’s obviously embarrassing but not without precedent,” he said. “When DeepSeek launched it was reported that staff at the Pentagon used it for days before it was blocked. It shows that any organization is susceptible to the risk of shadow AI and the need for urgent controls.”
Shadow AI is becoming a serious problem
While the blunder appears to have had no serious ramifications, security experts told ITPro that uploading sensitive information to unauthorized tools can pose serious risks to organizations.
This is an issue that has grown significantly over the last three years, with the increased use of AI in the enterprise giving rise to concerns about ‘shadow AI’.
This refers to the use of AI tools that haven’t been expressly authorized by security teams, meaning they can’t keep track of documents or potentially sensitive corporate information uploaded to the chatbots.
A recent survey from BlackFog, for example, found that nearly half (49%) of workers admitted to having used AI tools in the workplace without approval, often sharing sensitive data with free versions of popular solutions like ChatGPT.
Carl Wearn, head of threat intelligence analysis and future ops at Mimecast, told ITPro that shadow AI practices have serious long-term implications for data protection and cybersecurity.
“The risk itself is straightforward but serious. Once contracts, emails, or internal documents are entered into a public AI model, control is lost in seconds,” he said.
“There’s often no clear way to retrieve that data or understand where it may surface next.”
In most cases, Wearn noted that there’s “no malicious intent” and workers are simply being careless. This is, at least in part, due to demands placed on workers “in the name of efficiency”.
“Shadow AI is accelerating because people are under pressure to move faster, not because they are careless,” he told ITPro.
Leadership needs to take responsibility
While these activities are prevalent among workers, research shows leadership figures are also engaging in, and even encouraging, dangerous shadow AI practices.
BlackFog’s survey found more than two thirds (69%) of C-suite members said they’re happy to prioritize speed over privacy in many cases, effectively giving workers a seal of approval to engage in risky practices to speed up processes.
Wearn noted that management figures should lead by example on this front by engaging with staff to set clear guidelines.
“Leadership, accountability, and clarity now matter just as much as the technology itself,” he said. “Organizations that establish governance frameworks today will be better positioned to harness AI’s benefits while protecting what matters most.”

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.