OpenAI says it’s charting a “path to AGI” with its next frontier AI model
OpenAI confirmed it's working on a new frontier model alongside the launch of a new AI safety committee
OpenAI has revealed that it recently started work on training its next frontier large language model (LLM).
The first version of OpenAI’s ChatGPT debuted in November 2022 and became an unexpected breakthrough hit that launched generative AI into the public consciousness.
Since then, there have been a number of updates to the underlying model. The first version of ChatGPT was built on GPT-3.5, which finished training in early 2022, while GPT-4 arrived in March 2023. The most recent, GPT-4o, arrived in May this year.
Now OpenAI is working on a new LLM and says it anticipates the system will “bring us to the next level of capabilities on our path to AGI [artificial general intelligence].”
AGI is a hotly contested concept describing an AI that, like humans, would be adept at adapting to many different tasks, including ones it has never been trained on, rather than being designed for one particular use.
AI researchers are split on whether AGI could ever exist or whether the search for it may even be based on a misunderstanding of how intelligence works.
OpenAI provided no details of what the next model might do, but as its LLMs have evolved, the capabilities of the underlying models have expanded.
While GPT-3 could only deal with text, GPT-4 is able to accept images as well, and GPT-4o has been optimized for voice communication. Context windows have also increased markedly with each iteration, although the size of the models and other technical details remain secret.
Sam Altman, CEO at OpenAI, has stated that GPT-4 cost more than $100 million to train, per Wired, and the model is rumored to have more than one trillion parameters. That would make it one of the biggest LLMs currently in existence, if not the biggest.
That doesn’t necessarily mean the next model will be even larger; Altman has previously suggested the race for ever-bigger models may be coming to an end.
Smaller models working together, he has said, might be a more effective way of deploying generative AI.
And even if OpenAI has started training its next model, don’t expect to see its impact any time soon. Training a model can take many months, and that is just the first step: GPT-4 went through six months of testing after training finished before OpenAI released it.
New OpenAI safety committee given the green light
The company also said it will create a new ‘Safety and Security Committee’ led by OpenAI directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and Altman. This committee will be responsible for making recommendations to the board on critical safety and security decisions for OpenAI projects and operations.
One of its first tasks will be to evaluate and develop OpenAI’s processes and safeguards over the next 90 days. After that, the committee will share its recommendations with the board.
Some may raise eyebrows at the safety committee being made up of existing members of OpenAI’s board.
Dr Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cyber security at Capital Technology University, questioned whether the move will actually deliver positive outcomes as far as AI safety is concerned.
“Being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative – the absolutely crucial characteristics of GenAI solutions,” Kolochenko said. “In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement.”
The launch of the safety committee comes amid growing calls for more rigorous regulation and oversight of LLM development. Most recently, a former OpenAI board member argued that self-governance isn’t the right approach for AI firms and that a strong regulatory framework is needed.
OpenAI has made public efforts to calm AI safety fears in recent months. It was among a host of major industry players to sign up to a safe development pledge at the Seoul AI Summit that could see them pull the plug on their own models if they cannot be built or deployed safely.
But these commitments are voluntary and come with plenty of caveats, leading some experts to call for stronger legislation and requirements for tougher testing of LLMs.
Because of the potentially large risks associated with the technology, critics argue, AI companies should be subject to a regulatory framework similar to the one governing pharmaceutical companies, in which firms must meet standards set by regulators who make the final decision on whether and when a product can be released.
Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.