“Public trust has become the new currency for AI innovation”: Why SAS is ringing the alarm bell on AI governance for enterprises
Demonstrating responsible stewardship of AI could be the key differentiator for success with the technology, rather than simply speed of adoption


Leaders looking to implement AI should make AI governance and risk management their primary focus if they are to succeed in their adoption plans, according to a SAS executive.
Reggie Townsend, VP of data ethics at SAS, told assembled media at SAS Innovate 2025 that tech leaders need to consider the risks AI brings now, whether they’re ready for it or not.
To illustrate his point, Townsend said he had recently received two requests for AI use within SAS: one to use ChatGPT for account planning processes and another to use DeepSeek for internal workflow activities.
“Two separate requests, mind you, within two hours of one another, from two separate employees altogether,” Townsend noted.
“This is increasingly becoming a normal day for myself and for leaders like me around the world. So, if you just extrapolated that day over the course of a month, you’d have 40 different use cases for AI and those would only be the ones that we know about.”
While Townsend was quick to stress he has “zero desire” to know the details of every single use of AI around the company, he used the example to underline the clear need for every organization to set out internal AI policies.
To further illustrate his point, Townsend cited the IAPP’s AI Governance Profession Report 2025, which found 42% of organizations already rank AI governance as a top five strategic priority and 77% of organizations overall are working on AI governance measures.
While the report found that 90% of organizations that said they have adopted AI are pursuing AI governance, it also found that 30% of those without AI are already laying the groundwork for eventual adoption by working on an AI governance strategy.
“[This] would suggest that they’re taking a governance first approach to this thing, which I actually applaud,” Townsend said. “I like to say that trustworthy AI begins before the first line of code.”
Townsend added that every large organization in the US is either buying AI outright or adopting it unknowingly through AI updates to products it already uses. Employees, he added, are already using AI, and leaders will need to consider this demand now, not later, to ensure they don’t expose themselves to risks or undermine their AI adoption roadmap.
“The organizations that thrive won't simply be those that deploy AI first but it'll be those that deploy AI most responsibly,” he said, adding that leading organizations will recognize the strategic imperative of governance and embrace its potential benefits for innovation.
Describing governance as a “catalyst” for technological innovation, Townsend also warned that companies that push ahead with AI without considering safe adoption will be hurt in the long term.
“Organizations without AI governance face not just potential regulatory penalties, depending on the markets they’re in, they face a potential competitive disadvantage because public trust has become the new currency for AI innovation.”
Clear roadmaps for AI governance
SAS is targeting reliable AI, having just announced new AI models for industry-specific use cases. Townsend explained that, when it comes to AI that delivers predictable results, leaders should first establish a framework for measuring actual outcomes against expected outcomes, which compliance and oversight teams can refer back to.
“Now, this is a matter of oversight, for sure, this is a matter of operations, and this is a matter of organizational culture,” he said.
“All of these things combined are what represent this new world of AI governance, where there’s a duality, I like to say, that exists between these conveniently accessible productivity boosters that the team has been talking about this morning, intersecting with inaccuracies and inconsistency and potential intellectual property leakage.”
To address these issues, SAS has created the AI Governance Map, a resource for customers looking to weigh up their AI maturity across oversight, compliance, operations, and culture.
This helps organizations to assess their AI governance no matter where they are with AI implementation, rather than mandating they replace current AI systems with those specified by SAS.
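To make Townsend’s point about measuring actual outcomes against expected ones concrete, below is a minimal sketch of what such an auditable outcome log might look like. It is an illustration only, not SAS’s tooling: the class names, the tolerance threshold, and the credit-scoring example are all hypothetical.

```python
# Minimal sketch of an "expected vs. actual" outcome log for AI oversight.
# All names and figures are hypothetical illustrations, not SAS's product.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from statistics import mean

@dataclass
class OutcomeRecord:
    model_id: str
    expected: float   # outcome the business case promised, e.g. accuracy
    actual: float     # outcome observed in production
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class GovernanceLog:
    """Append-only record that compliance and oversight teams can refer back to."""

    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance  # acceptable gap between expected and actual
        self.records: list[OutcomeRecord] = []

    def record(self, model_id: str, expected: float, actual: float) -> None:
        self.records.append(OutcomeRecord(model_id, expected, actual))

    def out_of_tolerance(self) -> list[OutcomeRecord]:
        """Return records where the actual outcome drifted beyond tolerance."""
        return [r for r in self.records
                if abs(r.actual - r.expected) > self.tolerance]

    def mean_gap(self, model_id: str) -> float:
        """Average expected-vs-actual gap for one model, for trend reporting."""
        gaps = [abs(r.actual - r.expected)
                for r in self.records if r.model_id == model_id]
        return mean(gaps) if gaps else 0.0

# Hypothetical example: a model's business case promised 0.80 accuracy.
log = GovernanceLog(tolerance=0.05)
log.record("credit-scoring-v2", expected=0.80, actual=0.78)  # within tolerance
log.record("credit-scoring-v2", expected=0.80, actual=0.69)  # flagged for review
for r in log.out_of_tolerance():
    print(f"{r.model_id}: expected {r.expected:.2f}, got {r.actual:.2f}")
```

The design choice here mirrors the oversight framing in the quotes above: the log is append-only so compliance teams review a history rather than a snapshot, and deviations are flagged against a tolerance agreed in advance rather than judged after the fact.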
SAS has also committed to releasing a new holistic AI governance solution in the near future, designed to monitor models and agents as well as handle AI orchestration and aggregation. No further details on the tool have been revealed, but SAS has invited interested parties to sign up for a private preview later in 2025.
Townsend’s comments echo those of Vasu Jakkal, corporate vice president at Microsoft Security, who last week told RSAC Conference 2025 attendees that governance is an irreplaceable role and will be critical as organizations adopt more AI agents.
Just as SAS is stressing that governance is a key concern, Jakkal took the position that human-led governance will stay relevant as organizations look to keep tabs on the decisions AI agents make. Both are a pared-back version of the future predicted by the likes of Marc Benioff, who in February claimed that CEOs today will be the last with a fully human workforce.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.