CISA’s interim chief uploaded sensitive documents to a public version of ChatGPT – security experts explain why you should never do that
The incident at CISA raises yet more concerns about the rise of ‘shadow AI’ and data protection risks
Security experts have warned about the dangers of ‘shadow AI’ after the interim chief of the Cybersecurity and Infrastructure Security Agency (CISA) uploaded sensitive documents to a public version of ChatGPT.
According to reports from Politico, Madhu Gottumukkala uploaded documents to the popular AI tool even though it was blocked for other Department of Homeland Security (DHS) employees at the time.
The incident prompted an “internal review” after triggering security features designed to prevent theft or mistaken disclosure of government materials, sources told the publication.
While this review found that none of the files uploaded to the chatbot were classified, they did include CISA contracting documents marked “for official use only”.
CISA’s director of public affairs, Marci McCarthy, told Politico that Gottumukkala was granted permission to use ChatGPT “with DHS controls in place,” adding that use of the chatbot was “short-term and limited”.
Alastair Paterson, CEO and co-founder at Harmonic Security, told ITPro that the incident is concerning and highlights the potential for AI-related blunders even in tightly controlled environments.
“It’s obviously embarrassing but not without precedent,” he said. “When DeepSeek launched it was reported that staff at the Pentagon used it for days before it was blocked. It shows that any organization is susceptible to the risk of shadow AI and the need for urgent controls.”
Shadow AI is becoming a serious problem
Although the blunder appears to have had no serious ramifications, security experts told ITPro that uploading sensitive information to unauthorized tools can pose serious risks to organizations.
This is an issue that has grown significantly over the last three years, with the increased use of AI in the enterprise giving rise to concerns about ‘shadow AI’.
This refers to the use of AI tools that haven’t been expressly authorized by security teams, meaning they can’t keep track of documents or potentially sensitive corporate information uploaded to the chatbots.
A recent survey from BlackFog, for example, found that nearly half (49%) of workers admitted to having used AI tools in the workplace without approval, often sharing sensitive data with free versions of popular solutions like ChatGPT.
Carl Wearn, head of threat intelligence analysis and future ops at Mimecast, told ITPro that shadow AI practices have serious long-term implications for data protection and cybersecurity.
“The risk itself is straightforward but serious. Once contracts, emails, or internal documents are entered into a public AI model, control is lost in seconds,” he said.
“There’s often no clear way to retrieve that data or understand where it may surface next.”
Wearn noted that in most cases there’s “no malicious intent” behind these practices; workers are simply cutting corners to meet the demands placed on them “in the name of efficiency”.
“Shadow AI is accelerating because people are under pressure to move faster, not because they are careless,” he told ITPro.
Leadership needs to take responsibility
While these activities are prevalent among workers, research shows leadership figures are also engaging in, and even encouraging, dangerous shadow AI practices.
BlackFog’s survey found that more than two-thirds (69%) of C-suite members said they’re happy to prioritize speed over privacy in many cases, effectively giving workers a seal of approval to engage in risky practices to speed up processes.
Wearn noted that management figures should lead by example on this front by engaging with staff to set clear guidelines.
“Leadership, accountability, and clarity now matter just as much as the technology itself,” he said. “Organizations that establish governance frameworks today will be better positioned to harness AI’s benefits while protecting what matters most.”