Apple staff restricted from using ChatGPT, GitHub Copilot
The ban follows lingering concerns that employees using ChatGPT might leak company information
Apple employees have been restricted from using generative AI tools such as ChatGPT amid concerns that company information might be leaked or exposed.
A report from the Wall Street Journal revealed that the blanket ban is in direct response to worries that employees might input confidential data into the popular AI chatbot.
The WSJ report added that Apple is currently in the process of building its own internal generative AI toolset to support staff.
The restriction is thought to apply to a raft of external AI tools, including GitHub Copilot, the AI pair-programming platform used by developers at a host of major tech companies.
Apple’s ban on the use of ChatGPT and generative AI tools isn’t without justification. In March, OpenAI revealed that a bug in ChatGPT led to a leak of user data.
The flaw meant that some ChatGPT Plus users could see other subscribers' email addresses, names, payment addresses, and partial credit card information.
This incident prompted OpenAI to temporarily take the chatbot down to work on a fix.
The glitch in ChatGPT also allowed some users to view the conversation history of others.
The incident heightened concerns over use of the chatbot in workplace environments, with organizations warning that confidential company information entered into the platform by employees could be put at risk.
What can OpenAI access?
User conversations in ChatGPT can be inspected by OpenAI moderators in certain circumstances.
The company recently introduced a feature that enables users to turn off their chat history. However, OpenAI still stores conversations for up to 30 days before deleting them.
A key concern among businesses has been that OpenAI's models are trained in part on user inputs, meaning confidential information submitted to the chatbot could potentially be accessed or absorbed into future model training.
OpenAI has been vocal on this issue, revealing in April that it was working on a new ChatGPT Business subscription that would give enterprises better ways to “manage their end users”.
“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” the company said in an April statement.
A host of cyber security companies are currently developing tools primarily aimed at supporting businesses to reduce the risk of data leakage when using platforms such as ChatGPT.
Security firm ExtraHop recently unveiled a tool that enables companies to determine whether staff are inadvertently leaking confidential data when using generative AI tools.
ExtraHop said the new tool will help organizations "understand their risk exposure" from internal use of generative AI tools and "stop data hemorrhaging in its tracks".
ChatGPT bans
Apple isn’t alone in limiting the use of ChatGPT and generative AI tools for employees. In recent months, a host of major organizations globally have implemented similar policies to mitigate potential risks.
In February, JPMorgan Chase announced a temporary ban on ChatGPT for employees. At the time, the bank said the restriction stemmed from its policies on the use of third-party software.
Amazon is one of a number of others to have prevented employees from inputting confidential information into ChatGPT, along with US telco giant Verizon.
Perhaps most famously, Italy implemented a temporary nationwide ban on the technology, one of the first moves in a wave of restrictions that continues today.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.