Amazon Olympus could be the LLM to rival OpenAI and Google
With a reported two trillion parameters, Amazon Olympus would be among the most powerful models available
Amazon is developing a new large language model (LLM), known as ‘Olympus’, in a bid to topple ChatGPT and Bard, according to reports.
Sources at the company told The Information that the tech giant is working on the LLM and has allocated both resources and staff from its Alexa AI and science teams to spearhead its creation.
Development of the model is being led by Rohit Prasad, the former head of Alexa turned lead scientist for artificial general intelligence (AGI), according to Reuters.
Prasad moved into the role to specifically focus on generative AI development as the company seeks to contend with industry competitors such as Microsoft-backed OpenAI and Google.
According to sources, the Amazon Olympus model will have two trillion parameters. If correct, this would make it one of the largest and most powerful models currently in production.
By contrast, OpenAI’s GPT-4, the current market-leading model, is reported to have around one trillion parameters.
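To put those figures in perspective, a rough back-of-envelope calculation shows what a two-trillion-parameter model implies just for storing its weights. This is an illustrative sketch only: it assumes two bytes per parameter (typical fp16/bf16 precision), and real deployments vary with quantization, sharding, and memory needed beyond the weights themselves.

```python
# Approximate weight-storage footprint for an LLM, assuming 2 bytes
# per parameter (fp16/bf16). Illustrative only; real figures vary.
BYTES_PER_PARAM = 2

def weight_footprint_tb(n_params: float) -> float:
    """Approximate weight storage in terabytes (1 TB = 1e12 bytes)."""
    return n_params * BYTES_PER_PARAM / 1e12

print(weight_footprint_tb(2e12))  # reported Olympus size: 2T params -> 4.0 TB
print(weight_footprint_tb(1e12))  # reported GPT-4 size: 1T params -> 2.0 TB
```

On these assumptions, the reported Olympus would need roughly 4 TB just to hold its weights, twice the footprint implied by GPT-4's reported size.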
Olympus could be rolled out as early as December and there is a possibility the model could be used to support retail, Alexa, and AWS operations.
Is Amazon Olympus the successor to Titan?
Amazon already has its Titan foundation models, which are available for AWS customers as part of its Bedrock framework.
Amazon Bedrock offers customers a variety of foundation models, including models from AI21 Labs and Anthropic, which the tech giant recently backed with a multi-billion-dollar investment.
Amazon Olympus could be the natural evolution of Amazon’s LLM ambitions. Earlier this year, the company revealed it planned to increase investment in the development of LLMs and generative AI tools.
ITPro has approached Amazon for comment.
Amazon Olympus deviates from Bedrock "ethos"
Olympus could be seen as a departure from Amazon’s AI strategy to date. Amazon Bedrock has set the firm apart from the Google and Microsoft AI ‘arms race’ by focusing on providing as wide a range of third-party AI models as possible, in addition to its own Titan foundation models.
A core feature of Bedrock in demonstrations has been the ability to use multiple models across a single process like a production line, with models assigned to constituent tasks that play into their individual strengths.
The firm showed an example of this at its AWS Summit London conference, in which a marketing department used a Titan model to SEO-optimize a product listing, Anthropic’s Claude to generate the product description, and Stability AI’s Stable Diffusion to produce the product image.
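The "production line" pattern described above can be sketched in a few lines of code. This is a hypothetical illustration, not the Bedrock API: the model names and the `invoke()` stub are stand-ins for whatever SDK call a real pipeline would make, and only the routing structure is the point.

```python
# Illustrative sketch of routing each task in a pipeline to the model
# best suited to it. Model IDs and invoke() are hypothetical stand-ins.
def invoke(model_id: str, prompt: str) -> str:
    # Stand-in for a real model call (e.g. via a cloud SDK);
    # here it just tags the prompt with the model that handled it.
    return f"[{model_id}] {prompt}"

def marketing_pipeline(product: str) -> dict:
    # Each step plays to a different model's strengths.
    keywords = invoke("titan-text", f"SEO keywords for {product}")
    description = invoke("claude", f"Describe {product}, working in: {keywords}")
    image = invoke("stable-diffusion", f"Product photo of {product}")
    return {"keywords": keywords, "description": description, "image": image}

result = marketing_pipeline("wireless headphones")
```

The design choice the demo highlighted is that no single model needs to be best at everything; the pipeline composes specialists instead.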
If the firm has diverted some of its attention to creating a ‘killer’ AI model, one that will be able to outperform most or all models on its own, one could question where that fits in with Bedrock’s ethos.

Rory Bathgate is Features & Multimedia Editor at ITPro, leading our in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
The reported figure of two trillion parameters does not stretch believability, and if accurate it would almost certainly cement Olympus as the largest LLM on the market, at double the reported size of GPT-4.
Whether Olympus’ reportedly mammoth size translates into raw performance remains to be seen. LLMs are complex frameworks shaped as much by their training and fine-tuning as by the scale of the data they access, and experts in the field have been divided for years on whether bigger is better when it comes to generative AI.
For many years, it was thought there could be a cutoff point for generative AI model size, beyond which a model would lose its specificity and begin producing vague, generalized answers. That barrier has long since been passed, and the success many companies have had with ever-larger models has seemingly proved the concern wrong.
But there are also examples of smaller models performing at the same level as, or better than, their larger counterparts. Google’s PaLM 2, which powers its Bard chatbot, has 340 billion parameters compared with the original PaLM’s 540 billion, yet has consistently outperformed its predecessor across benchmarks and is competitive with GPT-4.
Olympus’ performance will speak louder than its spec sheet, and it’s too early to judge the model before its proper announcement. But its very existence represents a possible major course correction for Amazon, and customers will be watching with anticipation in the months to come.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.