Amazon Olympus could be the LLM to rival OpenAI and Google


Amazon is developing a new large language model (LLM) known as ‘Olympus’ in a bid to topple ChatGPT and Bard, according to reports.

Sources at the company told The Information that the tech giant is working on the LLM and has allocated both resources and staff from its Alexa AI and science teams to spearhead its creation.  

Development of the model is being led by Rohit Prasad, former head of Alexa turned lead scientist for artificial general intelligence (AGI), according to Reuters.

Prasad moved into the role to specifically focus on generative AI development as the company seeks to contend with industry competitors such as Microsoft-backed OpenAI and Google. 

According to sources, the Amazon Olympus model will have two trillion parameters. If accurate, this would make it one of the largest and most powerful models currently in development.

By contrast, OpenAI’s GPT-4, the current market-leading model, reportedly has around one trillion parameters.

Olympus could be rolled out as early as December and there is a possibility the model could be used to support retail, Alexa, and AWS operations.  

Is Amazon Olympus the successor to Titan?


Amazon already has its Titan foundation models, which are available to AWS customers through its Bedrock service.

Amazon Bedrock offers customers a variety of foundation models, including models from AI21 Labs and Anthropic, which the tech giant recently backed with a multi-billion-dollar investment.

Amazon Olympus could be the natural evolution of Amazon’s LLM ambitions. Earlier this year, the company revealed it planned to increase investment in the development of LLMs and generative AI tools. 

ITPro has approached Amazon for comment.  

Amazon Olympus deviates from Bedrock "ethos"

Olympus could be seen as a departure from Amazon’s AI strategy to date. Amazon Bedrock has set the firm apart from the AI ‘arms race’ between Google and Microsoft by focusing on providing as wide a range of third-party AI models as possible, in addition to its own powerful Titan foundation models.

A core feature of Bedrock in demonstrations has been the ability to chain multiple models across a single process, like a production line, with each model assigned to the constituent tasks that play to its individual strengths.

The firm showed an example of this at its AWS Summit London conference, in which a marketing department used a Titan model to SEO-optimize a product listing, Anthropic’s Claude to generate the product description, and Stability AI’s Stable Diffusion to produce the product image.
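For illustration, that kind of pipeline can be sketched against the Bedrock runtime API. The following is a minimal sketch in Python, assuming the boto3 “bedrock-runtime” client; the model IDs, request bodies, response shapes, and prompts are illustrative assumptions, not the exact configuration Amazon demonstrated.

```python
# A minimal sketch of a multi-model Bedrock pipeline, assuming the boto3
# "bedrock-runtime" client. Model IDs, request shapes, and response shapes
# are illustrative assumptions; check the Bedrock docs before relying on them.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def invoke(model_id: str, body: dict) -> dict:
    """Call a single Bedrock model and decode its JSON response."""
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    return json.loads(response["body"].read())

# Step 1: a Titan text model drafts SEO keywords for the listing.
titan_out = invoke(
    "amazon.titan-text-express-v1",  # illustrative model ID
    {"inputText": "Suggest five SEO keywords for a stainless steel water bottle."},
)
keywords = titan_out["results"][0]["outputText"]  # assumed Titan response shape

# Step 2: Anthropic's Claude turns those keywords into a product description.
claude_out = invoke(
    "anthropic.claude-v2",  # illustrative model ID
    {
        "prompt": "\n\nHuman: Write a short product description using these "
                  f"keywords: {keywords}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    },
)
description = claude_out["completion"]  # assumed Claude response shape

# Step 3: Stability AI's Stable Diffusion renders the product image,
# returned base64-encoded in the response body.
image_out = invoke(
    "stability.stable-diffusion-xl-v1",  # illustrative model ID
    {"text_prompts": [{"text": "Studio photo of a stainless steel water bottle"}]},
)
```

Each stage could just as easily be pointed at a different provider’s model, which is the interchangeability Bedrock’s design emphasizes.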

If the firm has diverted some of its attention to creating a ‘killer’ AI model, one able to outperform most or all rival models on its own, it’s fair to ask where that fits within Bedrock’s ethos.

Rory Bathgate
Features & Multimedia Editor

The reports that the model has two trillion parameters don’t stretch believability, and a parameter count of that size would almost certainly cement Olympus as the largest LLM on the market, at double the reported size of GPT-4.

Whether Olympus’ reportedly mammoth size translates into raw performance remains to be seen. LLMs are complex systems shaped as much by their training data and fine-tuning as by their sheer scale, and experts in the field have been divided for years on whether bigger is better when it comes to generative AI.

It was long thought that there could be a cutoff point for generative AI model size, beyond which a model would lose its specificity and begin giving vague, generalized answers. The largest models have since passed that supposed threshold, and the success many companies have charted with them has seemingly proved the sentiment wrong.

But there are also examples of smaller models performing at the same level as, or better than, their larger counterparts. Google’s PaLM 2, which powers its Bard chatbot, is reported to have 340 billion parameters compared with the original PaLM’s 540 billion, yet it has consistently outperformed its predecessor across benchmarks and is competitive with GPT-4.

Olympus’ performance will speak louder than its spec sheet, and it’s too early to judge the model before its official announcement. But its very existence represents a possible major course correction for Amazon, and customers will be watching with anticipation in the months to come.

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.