Apple staff restricted from using ChatGPT, GitHub Copilot


Apple employees have been restricted from using generative AI tools such as ChatGPT amid concerns that company information might be leaked or exposed. 

A report from the Wall Street Journal revealed that the blanket ban is in direct response to worries that employees might input confidential data into the popular AI chatbot. 

The WSJ report added that Apple is currently in the process of building its own internal generative AI toolset to support staff. 

The restriction is thought to apply to a raft of external AI tools, including GitHub’s AI Copilot platform, which is used by developers at a host of major tech companies. 

Apple’s ban on the use of ChatGPT and generative AI tools isn’t without justification. In March, OpenAI revealed that a bug in ChatGPT had led to a leak of user data.

The flaw meant that ChatGPT Plus users began seeing user email addresses, subscriber names, payment addresses, and limited credit card information. 

This incident prompted OpenAI to temporarily take the chatbot down to work on a fix. 

The glitch in ChatGPT also allowed some users to view the conversation history of others. 

This incident heightened concerns over the use of the chatbot in workplace environments, with organizations warning that confidential company information entered by employees could be at risk. 

What can OpenAI access?

User conversations in ChatGPT can be inspected by OpenAI moderators in certain circumstances. 

The company recently introduced a feature that enables users to turn off their chat history. However, OpenAI still stores conversations for up to 30 days before deleting them. 

A key concern among businesses has been that OpenAI models are, in part, trained on user inputs, meaning that there is a potential risk that confidential information could be accessed or used in the training of models. 


OpenAI has been vocal on this issue, revealing in April that it was working on a new ChatGPT Business subscription that would give enterprises better ways to “manage their end users”. 

“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” the company said in an April statement.

A host of cyber security companies are currently developing tools aimed at helping businesses reduce the risk of data leakage when using platforms such as ChatGPT. 

Security firm ExtraHop recently unveiled a tool that enables companies to determine whether staff are inadvertently leaking confidential data when using generative AI tools. 

ExtraHop said the new tool will help organizations “understand their risk exposure” from internal use of generative AI tools and “stop data hemorrhaging in its tracks”. 

ChatGPT bans

Apple isn’t alone in limiting the use of ChatGPT and generative AI tools for employees. In recent months, a host of major organizations globally have implemented similar policies to mitigate potential risks. 

In February, JPMorgan Chase announced a temporary ban on ChatGPT for employees, revealing at the time that the restriction stemmed from its policies on the use of third-party software. 

Amazon has also prevented employees from inputting confidential information into ChatGPT, as has US telco giant Verizon. 

Perhaps most famously, Italy implemented a temporary nationwide ban on the technology, one of the first moves in a wave of restrictions that continues today.

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.