Microsoft, in collaboration with OpenAI, unveils AI supercomputer


During its Build 2020 developer conference on Tuesday, Microsoft unveiled a new supercomputer built in collaboration with OpenAI, the artificial intelligence startup co-founded by Elon Musk. Microsoft said the supercomputer, hosted in Azure, was developed exclusively to train OpenAI’s large-scale artificial intelligence models.

Microsoft partnered with OpenAI in 2019 under a multiyear supercomputing agreement in which the tech giant invested $1 billion. The resulting supercomputer is a single system combining more than 285,000 CPU cores, 10,000 GPUs and 400Gbps of network connectivity for each GPU server. According to Microsoft, that places the machine among the top five supercomputers in the world.

Hosted in Azure, the supercomputer benefits from robust modern cloud infrastructure, sustainable data centers, rapid deployments and access to Azure services.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” Microsoft’s chief technology officer, Kevin Scott, explained.

Previous AI implementations were largely dedicated to single tasks, such as translating languages or recognizing specific objects in images. Modern research, however, focuses on developing massive models that can perform multiple tasks at once. According to Microsoft, this could include moderating game streams or generating code after analyzing GitHub. Microsoft argues that such large-scale models will make AI more beneficial to consumers and developers alike.

As part of its ‘AI at Scale’ initiative, Microsoft has built a family of large AI models, the Microsoft Turing models, to improve language understanding across Bing, Dynamics 365, Office and other productivity products.

Microsoft intends to make its large AI models, optimization tools and supercomputing resources available to developers, data scientists and business customers through Azure AI services and GitHub to help them leverage the power of AI at Scale.

Microsoft also revealed a new version of DeepSpeed, an open-source deep-learning library for PyTorch that cuts the amount of computing power required for large distributed model training. The update is more efficient than the version released just three months ago and can train models 15 times larger and 10 times faster.
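For readers curious how DeepSpeed fits into ordinary PyTorch training, here is a minimal, hypothetical sketch of the usage pattern. The model, batch sizes and config values are illustrative placeholders, not anything from Microsoft’s announcement: a plain PyTorch model is handed to deepspeed.initialize, and the returned engine takes over distributed setup, mixed precision and ZeRO partitioning of optimizer state behind the familiar backward/step calls.

```python
# Minimal sketch (not Microsoft's code): wrapping a plain PyTorch model with
# DeepSpeed. The model and config values below are illustrative assumptions.
import torch
import deepspeed

# Stand-in model; in practice this would be a large Transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},              # half precision to cut memory use
    "zero_optimization": {"stage": 2},      # partition optimizer state and gradients across GPUs
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that handles distributed setup,
# mixed precision and ZeRO partitioning behind the usual PyTorch calls.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

def train_step(batch, labels):
    # Inputs are cast to half precision to match the fp16-enabled engine.
    outputs = model_engine(batch.to(model_engine.device).half())
    loss = torch.nn.functional.mse_loss(
        outputs, labels.to(model_engine.device).half()
    )
    model_engine.backward(loss)   # replaces loss.backward()
    model_engine.step()           # replaces optimizer.step()
    return loss
```

A script like this would typically be launched with the `deepspeed` command-line launcher rather than `python`, which is what spreads the work across multiple GPUs or nodes.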
