Former OpenAI exec claims ‘toxic atmosphere’ and ‘outright lying’ prompted Altman coup
OpenAI CEO Sam Altman wasn’t truthful with the board on “multiple occasions”, prompting backroom discussions over his dismissal as early as October last year
A former OpenAI board member has claimed senior staff complained about a toxic work environment in the lead-up to Sam Altman’s ousting in November last year.
Speaking on The Ted AI Show podcast, Helen Toner said a motivating factor behind the boardroom coup was that two OpenAI executives reported instances of “psychological abuse” to the board.
Toner claimed Altman was responsible for fostering a “toxic atmosphere” at the tech giant and that complainants provided evidence to the board.
“The two of them suddenly started telling us about their experiences with Sam – which they hadn’t felt comfortable sharing before – but telling us how they couldn’t trust him about the toxic atmosphere he was creating, they used the phrase ‘psychological abuse’, that they didn’t think he was the right person to lead the company to AGI,” Toner told host Bilawal Sidhu.
“...Telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues.”
Toner noted that the aforementioned executives have “since tried to minimize” what they disclosed. However, she added these were “not casual conversations”.
“They were really serious, to the point where they sent us screenshots and documentation of some of the instances they were telling us about”.
Altman ousting was brewing at OpenAI for some time
Toner suggested these issues were one of a number of reasons for the boardroom coup, as lingering concerns over Altman’s conduct and truthfulness with colleagues had already prompted initial discussions over his dismissal in late October.
When Altman was initially ousted, the board’s stated reasoning was that he had not been “consistently candid” in his communications with it.
Several days of disruption and chaos at the firm ensued, during which Microsoft hired Altman to lead a new AI research division. This was followed by a staff revolt in which workers signed a petition demanding his immediate reinstatement.
Within a matter of days, Altman made a triumphant return to the company.
Toner appeared steadfast in the belief that the coup was the correct decision at the time given repeated instances in which Altman hadn’t been truthful with the board.
In one example, she noted that when ChatGPT launched in November 2022 the board was “not informed in advance” and learned of the launch via X.
“For years, Sam had made it really difficult for the board to actually do [its] job by withholding information, misrepresenting things that were happening at the company, and in some cases outright lying to the board,” Toner said.
AI safety was also a key friction point within the firm, with Altman accused of giving inaccurate information about the company’s practices on this front.
“On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company had in place,” she said.
“Meaning that it was basically impossible for the board to know how well those safety processes were working.”
OpenAI disputed Toner’s comments, telling Reuters that a subsequent review into the November 2023 incident found the board’s motivation was not based on any of the aforementioned factors.
The statement was based on comments given by OpenAI board chair Bret Taylor to the podcast.
“We are disappointed that Miss Toner continues to revisit these issues,” the statement read.
“The review concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.