Who is Mustafa Suleyman?
From Oxford dropout to ethical AI pioneer, Mustafa Suleyman is one of the biggest players in AI


The AI space is dominated by big personalities and bold claims about the future of the technology. It’s hard to go a week in 2025 without a mention of ‘artificial general intelligence’ and the promise (or doom) it is said to herald.
With the global tech industry’s razor-sharp focus on the roll-out of the technology in recent years, calm, conservative voices have been welcome: Mustafa Suleyman certainly ranks among them.
Suleyman commands a gleaming reputation in the AI field and has been a highly vocal figure in the space for well over a decade. Having dropped out of a degree in philosophy and theology at the University of Oxford, he turned his hand to entrepreneurship, eventually co-founding DeepMind in 2010. Four years later, the company was sold to search and cloud giant Google.
Suleyman stayed at what is now Google DeepMind until 2019, when he joined the parent company in a policy role, before departing in 2022 to co-found Inflection AI.
His involvement with Google DeepMind is what made his most recent career move something of a bombshell: in March 2024, Suleyman was unveiled as head of Microsoft’s consumer AI division, charged with driving the tech giant’s push in the generative AI race.
Serving as CEO of Microsoft AI, Suleyman holds a place on the senior leadership team, reporting directly to chief executive Satya Nadella, and steers the research and development of AI products including Microsoft’s flagship Copilot service, Bing, and Edge.
Suleyman’s appointment was a major coup for Microsoft, given his foundational role in the development of Google’s AI capabilities.
Ethical AI development
As well as being a leading light of the AI industry and UK tech royalty, Suleyman – perhaps drawing on his background in philosophy – has become a prominent contributor to discussions around AI ethics.
Suleyman’s stance has often erred on the side of caution, emphasizing the responsibility placed on enterprises developing these technologies and the need for ethical development practices, oversight, and governance.
In his 2023 book, The Coming Wave: AI, Power, and our Future, Suleyman noted that technologies such as AI will “usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen”. At the same time, he warned the rapid proliferation of the technology could “empower a diverse array of bad actors to unleash disruption, instability, and even catastrophe”.
Suleyman has consistently backed up this stance on the technology throughout his time in the big tech limelight. While still at DeepMind, he played a key role in steering the company’s Ethics & Society unit, established to help “explore and understand the real-world impacts of AI”.
More recently, in a 2024 TED Talk, Suleyman explained how his approach to talking and thinking about AI had changed with the arrival of generative AI.
“For years we in the AI community – and I specifically – have had a tendency to refer to this as just tools but that doesn’t really capture what’s actually happening here. AIs are clearly more dynamic, more ambiguous, more integrated, and more emergent than mere tools, which are entirely subject to human control. So to contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species.”
“Just pause for a moment and think about what they really do,” he continued. “They communicate in our languages, they see what we see, they consume unimaginably large amounts of information. They have memory, they have creativity, they can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them. And they do all this at levels of sophistication that is far beyond anything that we’ve ever known from a mere tool.”
“I think this frame helps sharpen our focus on the critical issues: What are the risks? What are the boundaries that we need to impose? What kind of AI do we want to build or allow to be built? This is a story that’s still unfolding.”

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.