Microsoft, in collaboration with OpenAI, unveils AI supercomputer

AI models and tools to be opened to developers through AI at Scale Initiative

During its Build 2020 developer conference on Tuesday, Microsoft unveiled a new supercomputer built in collaboration with OpenAI, the artificial intelligence research company co-founded by Elon Musk. Microsoft said the supercomputer, hosted in Azure, was developed exclusively to train OpenAI's large-scale artificial intelligence models.

Microsoft partnered with OpenAI back in 2019 under a multiyear agreement that saw the tech giant invest $1 billion in the company. The resulting supercomputer combines more than 285,000 CPU cores, 10,000 GPUs and 400Gbps of network connectivity for each GPU server in a single system. According to Microsoft, that places the machine among the top five entries on the TOP500 list of the world's fastest supercomputers.

Hosted in Azure, the supercomputer benefits from robust modern cloud infrastructure, sustainable data centers, rapid deployments and access to Azure services. 


“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” Microsoft’s chief technology officer, Kevin Scott, explained.

Previously, AI implementations were typically dedicated to single tasks, such as translating languages or recognising specific objects in images. Modern research, however, focuses on developing massive models that can perform multiple tasks at once. According to Microsoft, these could include moderating game streams or even generating code after analysing GitHub repositories. The company argues that such large-scale models will make AI more beneficial to consumers and developers alike.


As part of its ‘AI at Scale’ initiative, Microsoft has built a family of large AI models - the Microsoft Turing models - to improve language understanding across Bing, Dynamics, Office and its other productivity products.


Microsoft intends to make its large AI models, optimization tools and supercomputing resources available to developers, data scientists and business customers through Azure AI services and GitHub to help them leverage the power of AI at Scale. 

Microsoft also revealed a new version of DeepSpeed, an open-source deep-learning library for PyTorch that cuts the amount of computing power required for large distributed model training. Compared to the previous version, released just three months ago, the update can train models that are 15 times larger, up to 10 times faster.
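For readers curious how DeepSpeed is used in practice, training behaviour is driven by a JSON configuration file passed to the library alongside a standard PyTorch model. The sketch below is illustrative only: the specific batch size and option values are assumptions for the example, not figures from Microsoft's announcement.

```json
{
  "train_batch_size": 32,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  }
}
```

Here `fp16` enables mixed-precision training and `zero_optimization` turns on the ZeRO memory optimisations that allow DeepSpeed to fit far larger models onto the same hardware.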
