Amazon Olympus could be the LLM to rival OpenAI and Google
With a reported two trillion parameters, Amazon Olympus would be among the most powerful models available
Amazon is working on the development of a new large language model (LLM) known as ‘Olympus’ in a bid to topple ChatGPT and Bard, according to reports.
Sources at the company told The Information that the tech giant is working on the LLM and has allocated both resources and staff from its Alexa AI and science teams to spearhead its creation.
Development of the model is being led by Rohit Prasad, former head of Alexa turned lead scientist for artificial general intelligence (AGI), according to Reuters.
Prasad moved into the role to specifically focus on generative AI development as the company seeks to contend with industry competitors such as Microsoft-backed OpenAI and Google.
According to sources, the Amazon Olympus model will have two trillion parameters. If correct, this would make it one of the largest and most powerful models currently in production.
By contrast, OpenAI’s GPT-4, the current market-leading model, is reported to have around one trillion parameters.
Olympus could be rolled out as early as December, and the model may be used to support Amazon’s retail, Alexa, and AWS operations.
Is Amazon Olympus the successor to Titan?
Amazon already has its Titan foundation models, which are available for AWS customers as part of its Bedrock framework.
Amazon Bedrock offers customers a variety of foundation models, including models from AI21 Labs and Anthropic, which the tech giant recently backed with a multi-billion-dollar investment.
Amazon Olympus could be the natural evolution of Amazon’s LLM ambitions. Earlier this year, the company revealed it planned to increase investment in the development of LLMs and generative AI tools.
ITPro has approached Amazon for comment.
Amazon Olympus deviates from Bedrock "ethos"
Olympus could be seen as a departure from Amazon’s AI strategy to date. Amazon Bedrock has set the firm apart from the AI ‘arms race’ between Google and Microsoft by focusing on offering as wide a range of third-party AI models as possible alongside its own Titan foundation models.
A core feature of Bedrock in demonstrations has been the ability to use multiple models across a single process, like a production line, with each model assigned to the constituent tasks that play to its individual strengths.
The firm demonstrated this at its AWS Summit London conference, showing a marketing department using a Titan model to SEO-optimize a product listing, Anthropic’s Claude to generate the product description, and Stability AI’s Stable Diffusion to produce the product image.
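How such a pipeline might be wired together is easy to sketch. The snippet below is a hypothetical illustration rather than Amazon’s demo code: it assumes the AWS SDK for Python (boto3) and Bedrock’s InvokeModel API, and the model IDs, prompts, and request bodies shown are assumptions that vary by provider and region.

```python
# Hypothetical sketch of the multi-model "production line" described above,
# using the AWS SDK for Python (boto3) and Amazon Bedrock's InvokeModel API.
# Model IDs, prompts, and request bodies are illustrative assumptions only.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def invoke(model_id: str, body: dict) -> dict:
    """Send a request to a Bedrock foundation model and return its parsed JSON response."""
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    return json.loads(response["body"].read())

# Step 1: a Titan text model drafts SEO keywords for the listing.
keywords = invoke("amazon.titan-text-express-v1", {
    "inputText": "Suggest SEO keywords for a stainless steel travel mug.",
})

# Step 2: Anthropic's Claude writes the product description around those keywords.
description = invoke("anthropic.claude-v2", {
    "prompt": f"\n\nHuman: Write a product description using: {json.dumps(keywords)}\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

# Step 3: Stability AI's Stable Diffusion generates the accompanying product image.
image = invoke("stability.stable-diffusion-xl-v1", {
    "text_prompts": [{"text": "Studio photo of a stainless steel travel mug"}],
})
```

In practice each provider returns a different response schema, so a real pipeline would parse each model’s output before passing it along; the point is that a single Bedrock client can orchestrate several third-party models within one workflow.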
If the firm has diverted some of its attention to creating a ‘killer’ AI model, one that will be able to outperform most or all models on its own, one could question where that fits in with Bedrock’s ethos.
Reports that the model has two trillion parameters don’t stretch believability, and if accurate this would almost certainly cement Olympus as the largest LLM on the market, at double the reported size of GPT-4.
Whether Olympus’ reportedly mammoth size translates into raw performance remains to be seen. LLMs are complex frameworks shaped as much by their training and fine-tuning as by the scale of the data they access, and experts in the field have been divided for years on whether bigger is better when it comes to generative AI.
For many years, it was thought that there could be a cutoff point for generative AI model size, beyond which the model would lose its specificity and begin providing vague, generalized answers. We have long since broken that barrier, and the success many companies have charted with their models has seemingly proved this sentiment wrong.
But there are also examples of smaller models performing at the same level as, or better than, their larger counterparts. Google’s PaLM 2, which powers its Bard chatbot, has 340 billion parameters compared to the original PaLM’s 540 billion, yet has consistently outperformed its predecessor across benchmarks and is competitive with GPT-4.
Olympus’ performance will speak louder than its spec sheet, and it’s too early to judge the model before its proper announcement. But its very existence represents a possible major course correction for Amazon, and customers will be watching with anticipation in the months to come.