OpenAI's former chief scientist just raised $1bn for a new firm aimed at developing responsible AI
AI researcher Ilya Sutskever is the latest former OpenAI staffer to start their own firm focused on AI safety
OpenAI's former chief scientist has raised $1 billion for his new firm to develop safe artificial intelligence systems.
Ilya Sutskever co-founded Safe Superintelligence (SSI) in June, a month after leaving OpenAI. His departure followed a failed attempt to oust CEO Sam Altman in November 2023, an effort Sutskever had initially backed.
Reports suggest the investment values SSI at $5 billion. Investors include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, SSI said, as well as NFDG, the investment firm co-run by Daniel Gross, SSI's co-founder and CEO.
So far, SSI has ten employees split between Palo Alto, California and Tel Aviv, Israel. According to Reuters, SSI plans to spend the investment on hiring top AI engineers and researchers, as well as on processing power, both of which are major costs in developing AI.
Sutskever initially backed the effort to oust Altman, which appeared to center on tension between AI safety and shipping usable AI products. Amid the chaos that ensued at the AI giant, however, he swiftly U-turned.
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI," Sutskever said in a statement posted to X.
Sutskever announced his departure from OpenAI in May, saying at the time he was "confident that OpenAI will build AGI that is both safe and beneficial".
But weeks later, in mid-June, he announced the launch of the safety-focused SSI alongside Gross, who previously worked on AI at Apple, and Daniel Levy, another OpenAI alumnus.
Sutskever previously worked with "godfather of AI" Geoffrey Hinton, who stepped down from Google in May 2023 in order to more openly talk about the risks of artificial general intelligence (AGI) and super-intelligent AI.
SSI isn't the first company to emerge from OpenAI with a focus on safer AI. In 2021, Dario Amodei and his sister Daniela Amodei founded Anthropic to create safer AI after leaving the firm, with both reportedly concerned about the direction of the company.
Safe Superintelligence’s plans
SSI publicized its launch via a single web page of plain text on a white background.
"We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence," the company said at the time.
"We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs," the statement continued. "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."
Gross said in an interview with Reuters that a product should not be expected for years, a contrast to companies like OpenAI that are pushing out marketable versions of AI to fund wider work on AGI.
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross told Reuters.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.