Could years of AI conversations be your biggest security blind spot?
Staff conversations with AI tools must be strictly controlled to stop them becoming tomorrow's attack vectors
In October 2024, entrepreneur Tom Morgan sparked a viral trend by tweeting about a "Pretty cool" feature of ChatGPT: prompting it to tell you one thing about yourself that you may not have known before.
While individuals may find AI's uncanny personalization and insights entertaining, this reveals a much deeper problem: every seemingly benign conversation with ChatGPT, Claude and other public large language models (LLMs) can be accumulated to form a detailed profile of your thoughts, habits, and sensitive data.
OpenAI, Anthropic, Google and Microsoft all retain not only user prompts by default, but also generated outputs and metadata, feeding continuous feedback loops that refine their models.
While this is concerning for individual users' privacy, the implications for businesses are far more serious. Shadow AI use is booming: workers at over 90% of companies use chatbots to automate tasks, while just 40% of companies hold official LLM subscriptions, according to MIT's Project NANDA State of AI in Business 2025 report.
This typically involves employees bypassing sanctioned tools in favor of more convenient consumer AI alternatives like ChatGPT. This can vastly expand the attack surface for businesses, as sensitive corporate data can routinely be shared in daily workflows.
Naturally, major players like OpenAI seek to assure users that they take privacy seriously, for instance by encrypting data at rest and offering 'temporary' chats. However, these measures offer limited protection in practice.
For instance, in June 2025, a district court ordered OpenAI to retain all ChatGPT logs, including deleted chats, as part of a lawsuit brought against the AI firm by the New York Times.
This legal order has now been terminated and OpenAI has recommitted to wiping deleted ChatGPT conversations and temporary chats after 30 days. But the issue raises questions about the potential for interactions between workers and shadow AI tools to come back and bite organizations down the line.
In January, researchers at Wiz also uncovered a vulnerable database operated by DeepSeek that included sensitive information such as chat history, secret keys, and backend details.
AI platforms also don't need a misconfigured database to be vulnerable to leaks. In September 2024, PromptArmor discovered a prompt injection attack that could force Slack AI to leak sensitive data from private channels.
"Shadow AI isn’t just leaking information, it’s leaking the thought process behind the information,” says ethical hacker Daniel Kelley, in conversation with ITPro. “For cybercriminals, that’s the difference between finding a lost key and being handed the master key."
Weaponizing conversation data
The official tools and infrastructure used in most corporate environments have safeguards to prevent misuse and exploits. These can encompass enterprise-grade logging, network segmentation and strict onboarding procedures.
However, these can count for very little if, for instance, an accountant uploads a confidential spreadsheet into a third-party LLM or a board member uses AI to craft an internal memo that contains references to trade secrets.
These interactions can seem ephemeral to the end user, but such exchanges can reveal confidential data like work patterns, proprietary information, and even employee health records.
Bad actors exploiting AI to carry out cyberattacks is nothing new. In 2019, hackers contacted the CEO of a UK-based energy company, and used AI to impersonate the voice of the chief executive of its German parent corporation.
In this case the cybercriminals were able to scam $243,000 out of the company before the deception was discovered. Shadow AI is no less damaging and can introduce multiple, unseen vulnerabilities.
Kelley emphasizes that in a worst-case scenario, "Cybercriminals could go beyond stealing data, they could mimic employees, draft flawless spear-phishing emails, clone intellectual property, and exploit insider knowledge. With years of prompts, they’d have the material to manipulate, pressure, or even destabilize a business from the inside out."
The risk with cloud AI services could be profound. Earlier this year Andy Ward, SVP International at Absolute Security, told ITPro that using tools like DeepSeek in the workplace is “as good as printing out and handing over confidential information”.
In August, IBM found 20% of 600 organizations it surveyed had suffered a data breach directly linked to “security incidents involving shadow AI”.
Why AI policies miss the mark
Early reports of shadow AI had companies rushing to impose blanket bans on the technology – for example, Samsung forbade employees from using ChatGPT in 2023 after discovering that proprietary code had been uploaded to the platform.
A recent report by security training company Anagram found 58% of employees admitted to posting sensitive data into AI tools, including client records, financial data, and internal documents. As many as 40% also claimed they would knowingly violate company policies to finish a task quicker.
Recent Microsoft research is even more sobering for UK IT leaders, with 71% of surveyed employees in the region admitting to using unapproved AI tools at work. Over a fifth (22%) revealed they use shadow AI tools for financial tasks.
“Employees are willing to trade compliance for convenience,” wrote Harley Sugarman, founder & CEO at Anagram. “That should be a wake-up call.”
Compliance and governance are vital concerns, given that many data protection regulations, such as GDPR, mandate collecting only the minimum personal data necessary. AI tools, however, incentivize users to provide as much personal information as possible to improve outputs.
The consequences of noncompliance with data protection laws can also be substantial: major infringements, such as processing data for unlawful purposes, can cost companies up to €20 million or 4% of the organization's total worldwide annual turnover from the previous financial year, whichever is higher.
As Daniel Kelley notes: "Shadow AI sits outside official IT oversight. Unlike ransomware, these breaches can be silent. A firm might not realize its financial forecasts, product roadmaps, or legal strategies are already in an attacker’s dataset until damage is done."
Practical governance frameworks
According to IBM's Cost of a Data Breach Report 2025, 63% of organizations lack AI governance initiatives. The same report found that organizations with high levels of shadow AI faced breach costs around $670,000 higher on average than those with little or none.
In a blog, enterprise AI governance specialists Sahiba Pahwa, Anshul Garg, and Andrea Colmenares from IBM identified four critical pillars for organizations seeking to maintain compliance and mitigate risks like shadow AI:
- Lifecycle governance with a centralized inventory to track AI models and usage.
- Proactive risk management to detect and respond promptly to issues.
- Streamlined compliance and ethical oversight to reduce manual processing for relevant teams.
- Security management, including penetration testing of AI systems and usage protection to guard against unauthorized use.
This framework mandates both technical controls, such as network monitoring for unauthorized AI usage, and cultural change, which can be managed through regular AI policy training for employees.
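To illustrate what that kind of network monitoring might look like in practice, the sketch below scans an egress proxy log for traffic to consumer AI services that are not on an approved list. The log format, file path, and domain lists are illustrative assumptions rather than references to any specific product or vendor tooling.

```python
# Minimal sketch: flag employees whose web traffic reaches consumer AI services
# that IT has not sanctioned. Assumes a CSV proxy log with 'user' and
# 'destination_host' columns; domain lists are examples, not exhaustive.
import csv
from collections import defaultdict

WATCHED_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "chat.deepseek.com",
}
APPROVED_DOMAINS = {"copilot.cloud.microsoft"}  # e.g. an enterprise tenant behind SSO

def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> unapproved AI domains they contacted."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in WATCHED_DOMAINS and host not in APPROVED_DOMAINS:
                hits[row["user"]].add(host)
    return hits

if __name__ == "__main__":
    for user, domains in find_shadow_ai("proxy_egress.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```

A report like this is a starting point for conversation and policy training rather than enforcement on its own, since it only catches traffic that passes through monitored infrastructure.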
Recent advice from Gartner follows similar lines. At the Gartner Security & Risk Management Summit 2025, two Gartner experts advocated harnessing shadow AI as an opportunity for employee innovation, provided it is paired with strong controls.
Christine Lee, VP of research at Gartner, and Leigh McMullen, distinguished VP analyst at Gartner, suggested IT and security leaders could work with staff to identify the opportunities shadow AI tools offer, which could then be vetted and properly deployed in enterprise environments. At the same time, the pair urged CISOs to implement AI runtime controls and strong incident response strategies.
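One simple form a runtime control can take is a pre-submission filter that blocks or redacts obviously sensitive strings before a prompt leaves the network for an external LLM. The sketch below is an assumption-laden illustration of that idea; the patterns and policy are placeholders, and a real deployment would sit in a gateway or proxy and rely on far richer detection (DLP tooling, classifiers, allow-lists).

```python
# Minimal sketch of a prompt-screening runtime control: reject or redact
# prompts containing patterns that look like card numbers, API keys, or
# email addresses before they are forwarded to an external LLM.
# The regexes and policy below are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str, block: bool = True) -> tuple[bool, str]:
    """Return (allowed, possibly-redacted prompt).

    If block=True, any match rejects the prompt outright; otherwise the
    matches are redacted so the rest of the request can proceed.
    """
    redacted = prompt
    found = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            found.append(label)
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    if found and block:
        return False, prompt  # refuse to forward; log 'found' for incident response
    return True, redacted

# Example: this prompt would be redacted (or blocked, if block=True)
allowed, safe_prompt = screen_prompt(
    "Summarise this: customer jane@example.com, card 4111 1111 1111 1111",
    block=False,
)
print(allowed, safe_prompt)
```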
Scaling AI remains a challenge in corporate environments, as each new unmonitored conversation could compromise sensitive business data.
Despite the undoubted advantages of AI and machine learning, organizations that fail to implement strict governance frameworks could witness their strongest productivity tools quickly become a catastrophic security vulnerability.
Nate is a freelance technology writer based in Ireland, and has written for TechRadar and IT Pro Portal on a wide range of cloud and technology topics.