Who is Mustafa Suleyman?

From Oxford dropout to ethical AI pioneer, Mustafa Suleyman is one of the biggest players in AI

The AI space is dominated by big personalities and bold claims over the future of the technology. It’s hard to go a week in 2025 without a mention of ‘artificial general intelligence’ and the promise (or doom) that it will herald.

With the global tech industry's razor-sharp focus on the roll-out of the technology in recent years, calm, conservative voices have been welcome: Mustafa Suleyman certainly ranks among them.

Suleyman commands a gleaming reputation in the AI field and has been a highly vocal figure in the space for well over a decade now. Having dropped out of a degree in philosophy and theology at the University of Oxford, he turned his hand to entrepreneurship, eventually co-founding DeepMind in 2010. Four years later, the company was sold to search and cloud giant Google.

Suleyman stayed at what is now Google DeepMind until 2019, when he joined the parent company in a policy role, before departing in 2022 to found Inflection AI.

His involvement with Google DeepMind is what made his most recent career move a bombshell: in March 2024, Suleyman was unveiled as head of Microsoft's consumer AI division, charged with driving the tech giant's push in the generative AI race.

Serving as CEO of Microsoft AI, Suleyman holds a place on the senior leadership team, reporting directly to chief executive Satya Nadella, and steers the research and development of AI products including Microsoft’s flagship Copilot service, Bing, and Edge.

Suleyman’s appointment was a major coup for Microsoft, given his foundational role in the development of Google’s AI capabilities.

Ethical AI development

As well as being a leading light of the AI industry and UK tech royalty, Suleyman – perhaps drawing on his background in philosophy – has become a well-known contributor to the discussions around AI ethics.

Suleyman’s stance has often erred on the side of caution, emphasizing the responsibility placed on enterprises developing these technologies and the need for ethical development practices, oversight, and governance.

In his 2023 book, The Coming Wave: AI, Power, and our Future, Suleyman noted that technologies such as AI will “usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen”. At the same time, he warned the rapid proliferation of the technology could “empower a diverse array of bad actors to unleash disruption, instability, and even catastrophe”.

Suleyman has consistently backed up this stance on the technology throughout his time in the big tech limelight. While still at DeepMind, he played a key role in steering the company’s Ethics & Society unit, established to help “explore and understand the real-world impacts of AI”.

More recently, in a December 2024 TED Talk, Suleyman explained how his approach to thinking and talking about AI had changed with the arrival of generative AI.

“For years we in the AI community – and I specifically – have had a tendency to refer to this as just tools but that doesn’t really capture what’s actually happening here. AIs are clearly more dynamic, more ambiguous, more integrated, and more emergent than mere tools, which are entirely subject to human control. So to contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species.”

“Just pause for a moment and think about what they really do,” he continued. “They communicate in our languages, they see what we see, they consume unimaginably large amounts of information. They have memory, they have creativity, they can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them. And they do all this at levels of sophistication that is far beyond anything that we’ve ever known from a mere tool.”

“I think this frame helps sharpen our focus on the critical issues: What are the risks? What are the boundaries that we need to impose? What kind of AI do we want to build or allow to be built? This is a story that’s still unfolding.”

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.