OpenAI's former chief scientist just raised $1bn for a new firm aimed at developing responsible AI
AI researcher Ilya Sutskever is the latest former OpenAI staffer to start his own firm focused on AI safety
OpenAI's former chief scientist has raised $1 billion for his new firm to develop safe artificial intelligence systems.
Ilya Sutskever co-founded Safe Superintelligence (SSI) in June, having departed OpenAI in May in the wake of a failed attempt to oust CEO Sam Altman in November 2023, an effort Sutskever initially backed.
Reports suggest the investment values SSI at $5 billion. Investors include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, SSI said, as well as NFDG, which is co-run by SSI co-founder and CEO Daniel Gross.
So far, SSI has ten employees across Palo Alto, California and Tel Aviv, Israel. According to Reuters, SSI plans to spend the investment on hiring top AI engineers and researchers, as well as on computing power, both of which are major costs in developing AI.
Sutskever's backing of the effort to oust Altman appeared to stem largely from tension between AI safety and shipping usable AI products. However, amid the chaos that ensued at the AI giant, he swiftly reversed course.

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI," Sutskever said in a statement posted to X.
Sutskever announced his departure from OpenAI in May, saying at the time he was "confident that OpenAI will build AGI that is both safe and beneficial".
But weeks later, in mid-June, he announced the launch of the safety-focused SSI alongside Gross, who previously worked on AI at Apple, and Daniel Levy, also formerly of OpenAI.
Sutskever previously worked with "godfather of AI" Geoffrey Hinton, who stepped down from Google in May 2023 in order to more openly talk about the risks of artificial general intelligence (AGI) and super-intelligent AI.
SSI isn't the first company to emerge from OpenAI with a focus on safer AI. In 2021, Dario Amodei and his sister Daniela Amodei founded Anthropic to create safer AI after leaving the firm, with both reportedly concerned about the direction of the company.
Safe Superintelligence’s plans
SSI publicized its launch via a single webpage with plain text on a white background.
"We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence," the company said at the time.
"We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs," the statement says. "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."
Gross said in an interview with Reuters not to expect a product for years — a contrast to companies like OpenAI that are pushing out marketable versions of AI to fund wider work on AGI.
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross told Reuters.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
