"The big obstacle isn't anything technical": Dell CTO John Roese on why companies are failing on AI adoption

John Roese, global CTO at Dell Technologies, speaking onstage at a luminary session (Image credit: Future/Rory Bathgate)

A lack of clear vision for AI continues to be one of the biggest stumbling blocks for businesses looking to adopt the technology, according to John Roese, global chief technology officer (CTO) at Dell Technologies.

In conversation with ITPro at Dell Technologies World 2024, Roese says the main barrier to enterprise AI adoption this year is that leaders lack a clear plan for what they want to achieve with AI.

“Today the big obstacle isn't anything technical,” he says. “The customers that haven't moved yet are really struggling not with technology to use, they're still stuck on what process, what data, what is their goal.”

“We're having way more conversations with people at the most senior levels, not about what technology to use, even though that’s important, but more about this discussion of what makes you, you as a company.”

“If you don't understand what your core value, process, and capability is then you don't really know the answer to where you should start. But the minute you know, that's the answer.”

At Dell, for example, Roese says there are four clear areas where AI-driven improvements have justified its investment in the technology: global supply chain, global sales force, global engineering capability, and global services schedule.

“If we do any of those four better than we do today, we win. Because we’re already really good at them and we said ‘If that makes Dell, Dell’ then we’re going to apply all our energy to apply AI first.”

Roese says this process of figuring out your brand identity and goal also applies to getting your data in the right structure for AI and ensuring it’s been cleaned for use in a data pipeline, as he told ITPro at Dell Technologies World 2023. For example, several years ago Dell removed non-inclusive language from its entire content repository, as part of a wider sectoral move to drop words such as ‘blacklist’ or ‘slave’ from code and documentation.

Roese says this not only brought Dell’s data in line with its values but also, through “dumb luck”, helped ready it for AI trained on its own data. As Dell’s data no longer has off-brand language, he says, “your ability to feed that into AI becomes significantly faster, because you’re no longer worried that it’s going to misrepresent your company”.
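To make that concrete, a pre-ingestion clean-up pass over a content repository might look something like the minimal Python sketch below. It is purely illustrative: the term list, replacements, file paths, and function names are hypothetical rather than Dell's actual tooling.

```python
# Illustrative only: a simple pre-ingestion scan that rewrites non-inclusive terms,
# loosely echoing the kind of repository clean-up Roese describes. The term list,
# paths, and replacements are hypothetical assumptions, not Dell's actual process.

from pathlib import Path

REPLACEMENTS = {
    "blacklist": "denylist",
    "whitelist": "allowlist",
    "slave": "replica",
}

def clean_text(text: str) -> str:
    """Replace flagged terms before the document enters the data pipeline."""
    for old, new in REPLACEMENTS.items():
        text = text.replace(old, new).replace(old.capitalize(), new.capitalize())
    return text

def clean_repository(root: str) -> None:
    """Walk a (hypothetical) content repository and rewrite any flagged files."""
    for path in Path(root).rglob("*.md"):
        original = path.read_text(encoding="utf-8")
        cleaned = clean_text(original)
        if cleaned != original:
            path.write_text(cleaned, encoding="utf-8")

if __name__ == "__main__":
    clean_repository("./content")  # hypothetical repository location
```

Real tooling would match on word boundaries and review changes before committing them; the point is simply that a corpus cleaned this way can be fed to an AI model with less risk of it echoing off-brand language.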

A lack of governance hindering AI innovation

Randomly picking a process to be enhanced through AI is a recipe for failure, Roese says, but he adds that this is still the hurdle at which many businesses fall. 

“The thing we discovered in many customers is that they didn't have a governance process, so the first project that showed up asking for stuff got it. By the time the third project showed up, there were no more resources. But they didn't know if the first project or the third project were the ones that were going to move the needle.”

Roese notes that some customers are stuck in an introspective phase where they don’t move into AI for fear of choosing the wrong solution or adoption strategy.

“They don't know where to apply it. They’re not saying they don't understand the technology – that may or may not be true. They're saying, ‘I don't know where to start and I'm afraid that if I take all my resources and randomly throw them like a dart with my eyes closed, I might hit the wrong target, it might have no value.’”

Dell has moved rapidly to adopt AI internally but Roese is clear that it had to be just as introspective as any other firm.

“We learned that early on, we had 800 projects at Dell that were potential candidates and we quickly whittled them down to about 16,” he tells ITPro.

When it comes to Dell’s strategy on AI, Roese is the first to admit that it’s still learning and adapting to the changing AI landscape.

“Every time we're doing it ourselves, we're learning about what the products need to be, what the ecosystem needs to be, how we get this to the broader market. So, you know, the phrase ‘drinking your own champagne’ or ‘eating your own dog food’, whichever one you prefer, we are absolutely doing that at scale.”

Although pinning down a vision for AI and getting to grips with your brand identity can be difficult, Roese says there is cause for great hope in the fact that the limiting factor for AI adoption is the clarity of your view of the technology rather than your technical prowess.

Hallucinations no longer an issue

Even as businesses continue to struggle with their AI strategy, many have now overcome some of the best-known technical issues with the technology. Asked if issues such as hallucinations are still a real concern for enterprise AI, Roese responds with a confident “no”.

‘Hallucination’ is the term for when an AI model confidently outputs incorrect information. These AI missteps were subject to intense scrutiny throughout 2022 and 2023, with OpenAI’s Sam Altman saying they are part of the ‘magic’ of generative AI and some leaders holding off on AI investment due to worries that models could spout nonsense answers to customers or produce poor-quality code.

But Roese says that in 2024, new methods and a better understanding of how to situate AI within a company’s ecosystem have reduced the impacts of hallucinations dramatically.

“Enterprise architectures aren't just about the adoption of a large language model (LLM); you don’t just take Llama 3 and use it, you either fine-tune it or you connect it to your data via retrieval augmented generation (RAG), and you put countermeasures and techniques around it,” Roese explains.


“We've worked through that, because no enterprise would legitimately use a technology that was giving out bad information all the time. You know, a year ago the concern was ‘are these things ever enterprise-ready?’ because by themselves they hallucinate, they interpret things in interesting ways.”

Roese says that once you really recognize that no AI model is working in a vacuum, concerns around hallucinations go down.

“We realized it's not them by themselves. We realized very quickly that enterprise architecture is using the data that you already have that, hopefully, is accurate and really reflects your company, its ethos, its customers, and its priorities.”
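As a rough illustration of the pattern Roese describes, the Python sketch below grounds a prompt in retrieved enterprise documents rather than relying on the base model alone. It is a minimal, assumption-laden example: the sample documents, the toy lexical retriever, and the commented-out generate() call are hypothetical stand-ins for whatever vector store and LLM an enterprise actually deploys.

```python
# Minimal retrieval augmented generation (RAG) sketch, for illustration only:
# ground the model's prompt in retrieved enterprise documents instead of letting
# it answer from its own parameters. The documents and the generate() call are
# hypothetical placeholders.

from collections import Counter

DOCUMENTS = [
    "Supply chain forecasts are refreshed weekly from regional demand data.",
    "The services team follows a four-step escalation process for support tickets.",
    "Sales enablement content is reviewed quarterly by the field marketing group.",
]

def score(query: str, doc: str) -> int:
    """Toy lexical relevance score: count the terms shared by query and document."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(doc.lower().split())
    return sum(min(count, d_terms[term]) for term, count in q_terms.items())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Constrain the model to answer only from the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("How often are supply chain forecasts updated?"))
    # In a real deployment the prompt would go to a fine-tuned or hosted model:
    # answer = generate(build_prompt(query))  # hypothetical LLM call
```

The retrieval step, plus the instruction to answer only from the supplied context, are examples of the simple countermeasures that narrow the room a model has to hallucinate.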

AI systems are still prone to error, as seen in ChatGPT’s very public malfunction in February 2024, and this is something businesses still keep an eye on. But Roese argues that this isn’t unique to AI and is, in fact, not meaningfully different from knowing someone who regularly exaggerates or outright lies.

When we meet these people in our lives, he says, we respond by simply approaching what they say with a pinch of salt.

“The reality is now we have an ecosystem with AIs. If I'm using ChatGPT, I am aware now that it is a public AI service that is prone to a higher degree of creative license and hallucination,” he tells ITPro.

“That's fine and if I want to write a haiku, that's probably okay. If I want to summarize something that I understand well, that's probably fine. But would I use it to do really proprietary, confidential, and mission-critical things? Probably not.”

Overall, Roese says that hallucinations come up to a far lesser degree in boardroom conversations. Even as leaders look to tackle strategic issues with AI, their growing understanding of the technical approaches that can improve reliability, such as grounding models in their own data, has boosted their confidence in the technology.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.