Now you can try out Google Gemini Pro for yourself

(Image credit: Google)

Developers have been offered a chance to try out Google’s powerful new Gemini Pro AI model following its high-profile arrival last week.

Google has said that Gemini can run on everything from data centers to smartphones. 

The launch of the new model represents the tech giant’s best hope of catching up with OpenAI’s ChatGPT, which has been grabbing headlines – and user momentum – over the last year.

The first iteration of the model, Gemini 1.0, will come in three sizes: the lightweight Gemini Nano for smartphones and other small devices, Gemini Pro for scaling across a wide range of applications, and Gemini Ultra for highly complex tasks.

Google has touted strong test results for Gemini Ultra, noting that it beats the current state-of-the-art results on 30 of the 32 top benchmarks for large language models, across areas such as image, audio, and video understanding and mathematical reasoning.

The firm made sure to highlight where Gemini Ultra beats GPT-4; however, Ultra won’t be available until next year, while OpenAI's flagship model has been out since March.

Now Gemini Pro is available for developers and enterprises to build for their own use cases.

It’s available in two ways: via Google AI Studio, a free web-based tool for developers to build their prompts, and through Google’s more comprehensive Vertex AI platform, which allows companies to build production-grade agents using their own data.

Gemini Pro: What can users expect?

Google said Gemini Pro currently accepts text as input and generates text as output, although there is also a Gemini Pro Vision version available that accepts text and imagery as input, with text output. 

Software Development Kits for Gemini Pro are now openly available and support a range of languages and platforms, including Python, Kotlin (Android), Node.js, Swift, and JavaScript.
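As an illustration, here is a minimal sketch of calling Gemini Pro from the Python SDK. It assumes the `google-generativeai` package is installed, that the text-only model is addressed by the name "gemini-pro", and that an API key from Google AI Studio is stored in a `GOOGLE_API_KEY` environment variable; treat the exact names as assumptions rather than a definitive reference.

```python
# Sketch of a text-in, text-out call to Gemini Pro via the Python SDK.
# Assumes: pip install google-generativeai, and a GOOGLE_API_KEY env var
# holding a Google AI Studio key. Model name "gemini-pro" is assumed.
import os


def build_prompt(task: str, context: str) -> str:
    """Combine a task description and supporting context into one text
    prompt, since Gemini Pro takes text as input and returns text."""
    return f"{task}\n\nContext:\n{context}"


# Only attempt a real API call when a key is actually configured.
if os.environ.get("GOOGLE_API_KEY"):
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content(build_prompt(
        "Summarise the following in one sentence.",
        "Gemini 1.0 ships in three sizes: Nano, Pro, and Ultra."))
    print(response.text)
```

The same request can be made from the other supported SDKs, or directly through Google AI Studio without writing any code.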

The Google AI Studio developer tool offers a free limit of 60 requests per minute.

However, Google said that, to help it improve product quality, input and output submitted through the free API and Google AI Studio quota “may be accessible” to trained reviewers, although the data is “de-identified” from the developer’s Google account and API key.
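The 60-requests-per-minute figure comes from the free-tier limit above; one simple way for a client to stay inside it is a rolling-window throttle. The sketch below is purely illustrative pacing logic, not a feature of Google's SDKs.

```python
# Illustrative client-side throttle: allow at most `limit` calls in any
# rolling `window`-second period, sleeping when the budget is exhausted.
# The default of 60 calls per 60 seconds mirrors the free-tier limit.
import time
from collections import deque
from typing import Optional


class MinuteRateLimiter:
    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # monotonic timestamps of recent calls

    def wait(self, now: Optional[float] = None) -> float:
        """Block until a call is permitted; return the seconds slept."""
        now = time.monotonic() if now is None else now
        # Discard timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        slept = 0.0
        if len(self.calls) >= self.limit:
            # Sleep until the oldest call in the window expires.
            slept = self.window - (now - self.calls[0])
            time.sleep(slept)
            now += slept
            self.calls.popleft()
        self.calls.append(now)
        return slept
```

In practice a caller would invoke `wait()` immediately before each API request, so bursts beyond the quota are smoothed out instead of rejected server-side.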

With Vertex AI, developers have access to the same Gemini models, but will be able to tune them with their own company’s data or add up-to-the-minute information and extensions that take real-world actions.

Google has also unveiled an upgraded version of its image model, Imagen 2, which improves photorealism, text rendering, and logo generation capabilities.

Similarly, the firm announced MedLM, a family of foundation models fine-tuned for healthcare industry use cases.

When Gemini was first unveiled, Google chief executive Sundar Pichai said the company was “nearly eight years into our journey as an AI-first company”.

But it has also clearly been wrong-footed by the unexpected and huge success of ChatGPT. The wave of announcements around Gemini will be aimed at countering that – especially as OpenAI has been battling controversy in recent weeks.



Google released a video showing some of the capabilities of Gemini, and although it’s impressive, the firm noted that latency in the demo has been reduced and Gemini’s outputs have been shortened for brevity, meaning the real-world experience is a little less polished.

Even so, Gemini is the latest entrant to the AI race. It was specifically designed as a multimodal model, which means it has been trained to recognize and understand text, images, and audio at the same time, so it better understands nuanced information and can answer questions on complicated topics.

This makes it better at explaining reasoning in complex subjects like math and physics, Google claimed. The first version of Gemini can explain and generate code in programming languages including Python, Java, C++, and Go.

Gemini Pro is already being put to use as part of Google’s Bard chatbot, while Gemini Nano is running on Google’s Pixel 8 Pro smartphone, where it powers the Summarize feature in the Recorder app and Smart Reply in Gboard.

Gemini Ultra is listed by Google as “coming soon” because it’s still undergoing extensive trust and safety checks. Ultra will be offered to some customers and developers for early experimentation and feedback before an eventual roll-out for developers and enterprise customers in early 2024.

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of