OpenAI hailed for ‘swift move’ in terminating Mixpanel ties after data breach hits developers
The Mixpanel breach prompted OpenAI to launch a review into its broader supplier ecosystem
OpenAI has admitted a security breach at a third-party supplier exposed customer emails, location information, and “limited analytics data related to some users of the API”.
The supplier, Mixpanel, provides data analytics services for OpenAI’s developer platform. OpenAI said the analytics platform is used to help “understand product usage” and improve services for its API product, platform.openai.com.
On 9 November, Mixpanel discovered that an attacker had gained unauthorized access to its systems and exfiltrated a dataset containing “limited customer identifiable information and analytics information”.
A full outline of data exposed, per an OpenAI statement on the breach, includes:
- Names provided via the API account
- Email addresses associated with the API account
- Approximate coarse location based on the API user’s browser (including city, state, and country)
- Information on operating systems and browsers used to access the API account
- Referring websites associated with the API account
OpenAI has been keen to stress that the breach only affects developers and not general ChatGPT users. It also said sensitive developer data – including passwords, credentials, payment information, and government IDs – wasn’t exposed.
OpenAI added that it’s currently in the process of notifying those affected by the incident.
A swift response from OpenAI
Upon discovery of the breach, OpenAI said it removed Mixpanel from production services and began a review of affected datasets.
While the investigation is still ongoing, the company noted it has so far found “no evidence of any effect on systems or data outside Mixpanel’s environment”.
The company has since terminated its use of the data analytics platform and said it will conduct a review of its broader supplier ecosystem.
“Trust, security, and privacy are foundational to our products, our organisation, and our mission,” OpenAI said in a statement. “We also hold our partners and vendors accountable for the highest bar for security and privacy of their services.”
Jake Moore, global cybersecurity advisor at ESET, commended OpenAI for its “swift move” in alerting users and cutting ties with the supplier. Many organizations try to minimize security incidents and keep them “under the radar”, he said.
“Companies often fear the aftermath of an attack and presume it will be brand damaging,” Moore commented. “However, openness is now deemed far more important and speed is usually of the essence in making anyone affected aware of the situation.”
Developers warned to remain vigilant
OpenAI said the information exposed in the breach could be used by hackers to carry out future attacks, and encouraged users to “remain vigilant”.
These types of warnings are common in the wake of a data breach, according to Moore.
“Even though the exposed data was low-sensitivity, it could still be misused in the likes of social engineering techniques or via phishing attacks because attackers could combine the data such as name, email, even approximate location data to craft convincing fraudulent messages,” he explained.
“As in the wake of typical data compromises, those affected need to remain vigilant for suspicious emails or other strange communications.”