AI, hallucinations and Foo Fighters at Dreamforce 2023

The Foo Fighters drummer and lead singer playing a gig
(Image credit: Getty Images)

Dreamforce 2023 was, by all accounts, a resounding success, giving Salesforce an opportunity to showcase its razor-sharp focus on generative AI and its ethical AI goals to the more than 40,000 attendees who descended on San Francisco. 

Tech firms always mark events like Dreamforce with an A-list performance, and this year the Foo Fighters rocked the Chase Center on the penultimate night of an overwhelming week. 

Often enough, these acts have nothing to do with the event – Salt-N-Pepa played Databricks Data+AI Summit 2023, for example – but at Dreamforce, this choice was oddly appropriate.

During WWII, Allied pilots reported seeing floating orbs – dubbed ‘foo fighters’ – flying beside their planes while on bombing raids over occupied Europe. 

B-25 planes flying in WWII

(Image credit: Getty Images)

This phenomenon captivated people the world over, with some claiming they were otherworldly entities keeping tabs on a warring human race. Others, however, suggested they could have been hallucinations arising from combat-related stress. Flak cannons and marauding Messerschmitt fighter planes tend to have that effect.

The end of WWII heralded the beginning of a new era and a period of rapid technological advancement. Much like then, we find ourselves at the precipice of another tectonic shift. Generative AI is the term on everyone’s lips, with the emergence of ChatGPT prompting a huge wave of hype and optimism. 

But, much like the Allied pilots who reported those elusive foo fighters lingering ominously on the horizon, firms operating in this space are encountering new specters in the form of regulatory scrutiny, customer hesitancy, and AI’s own dangerous hallucinations. 

AI hallucinations: Magic or lies?

During his keynote, Salesforce chief executive Marc Benioff outlined the company’s ambitious plans to capitalize on the AI era.

For many, generative AI represents the technological promise humanity has been striving to achieve for decades – relieving workers from monotony and opening up more time to pursue meaningful work. 

But that’s the sunlit uplands view. Behind the glamorous veneer of generative AI lie lingering concerns over data privacy, workforce upheaval, and hallucination-induced calamity. 

Benioff fired a broadside at large language model (LLM) companies during his opening keynote, suggesting their hunger for data without any regard for ethics or safety may put enterprises at risk of being exploited. Benioff and others singled out the lack of safeguards as a key inhibitor to long-term success and customer adoption.

Sam Altman and Marc Benioff in discussion at Dreamforce 2023

(Image credit: Salesforce)

“They call them hallucinations,” he said. “I call them lies.”

Benioff’s bold declaration came just hours before a fireside chat with OpenAI CEO Sam Altman, in which Benioff probed the controversial exec on the issue. Altman’s relaxed views on AI hallucinations are unlikely to fill users, enterprises, and regulators alike with confidence. 

To him, this is what gives generative AI its “magic” – putting him at odds with Benioff and Salesforce as a whole. In this new generative AI era, according to the tech giant, trust and confidence are critical.

Doubling down on trust and transparency

We heard ‘trust’ a lot throughout Dreamforce 2023, with Salesforce keen to emphasize its responsible approach to AI development at every opportunity. 

The firm has a “laser sharp focus” on ethical, responsible, and trusted AI development and use, said its head of ethical AI, Kathy Baxter – and it’s a focus the company has maintained for several years. 

She stressed the need to respect human rights and the data the company has been entrusted with, adding that it never uses customers’ data to train models without explicit consent.

“All of the models we’re building … [use] consented data and so customers can feel confident in it,” she told reporters. “We’re grounding these models in our customers' data as well so they can feel confident that it’s going to be accurate.”

It’s obvious Salesforce has been using ‘trust’ to differentiate itself from industry counterparts. In 2019, the company published its trusted AI principles, Baxter noted, which have “guided the creation and implementation of AI ever since”. 

It also created a dedicated ethical and humane use officer role to steer the responsible use of AI, and it’s proud to have created the industry’s first chief trust officer to support this.


But that’s not all. Clara Shih, CEO of Salesforce AI, repeatedly emphasized the importance of ethical and trusted AI use, suggesting it’s a key talking point among customers. Internal research shows that while 76% of customers trust companies to make “honest claims” about AI products and services, just 57% trust them to use AI ethically. 

“What we’re hearing from customers is always around trust,” she said. “How do I make sure from a data security, governance perspective, and from an ethics and responsibility perspective, that we have all the guardrails in place?”

An AI future without ‘foo fighters’

This sharp privacy focus could stand Salesforce in good stead with customers and lawmakers on both sides of the Atlantic amid a period of heightened regulatory scrutiny. The EU’s strong position on generative AI, for instance, may worry industry stakeholders who see it as detrimental to innovation. 

In this context, Salesforce is strategically positioning itself to become the gold standard for generative AI “done right” in the eyes of regulators. 

“We’re just at the beginning of that [generative AI] breakthrough,” Benioff told reporters last week – and he’s not wrong. It’s clear he, and Salesforce, see the coming months and years as a period of significant opportunity, and a chance to pull away from the competition. 

The firm started 2023 in a precarious position, having laid off thousands of workers. But its performance in recent months has been impressive, fueled in large part by interest in generative AI. Then, as Dreamforce 2023 drew to a close, the firm announced plans to hire more than 3,000 staff across its engineering, sales, and data cloud segments.


It’s obvious that Salesforce has a clear-cut vision for AI, one in which it strips away the issues that might undermine the technology’s path to maturity – including dangerous hallucinations.

With responsible development at its core, the tech giant is framing itself as a trusted player in the space to differentiate itself from competitors, many of which often find themselves embroiled in controversy.

Ross Kelly
News and Analysis Editor
