“Public trust has become the new currency for AI innovation”: Why SAS is ringing the alarm bell on AI governance for enterprises
Demonstrating responsible stewardship of AI could be the key differentiator for success with the technology, rather than simply speed of adoption


Leaders looking to implement AI should make AI governance and risk management their primary focus if they are to succeed in their adoption plans, according to a SAS executive.
Reggie Townsend, VP, Data Ethics at SAS, told assembled media at SAS Innovate 2025 that tech leaders need to consider the risks AI brings now, whether they’re ready for it or not.
To illustrate his point, Townsend said he had recently received two requests for AI use within SAS: one to use ChatGPT for account planning processes and another to use DeepSeek for internal workflow activities.
“Two separate requests, mind you, within two hours of one another, from two separate employees altogether,” Townsend noted.
“This is increasingly becoming a normal day for myself and for leaders like me around the world. So, if you just extrapolated that day over the course of a month, you’d have 40 different use cases for AI and those would only be the ones that we know about.”
While Townsend was quick to stress he has “zero desire” to know the details of every single use of AI around the company, he used the example to underline the clear need for every organization to set out internal AI policies.
To further illustrate his point, Townsend cited the IAPP’s AI Governance Profession Report 2025, which found 42% of organizations already rank AI governance as a top five strategic priority and 77% of organizations overall are working on AI governance measures.
While the report found 90% of organizations who said they have adopted AI are pursuing AI governance, it also found 30% of those without AI are already laying the groundwork for their eventual adoption by working on an AI governance strategy.
“[This] would suggest that they’re taking a governance first approach to this thing, which I actually applaud,” Townsend said. “I like to say that trustworthy AI begins before the first line of code.”
Townsend added that every large organization in the US is either buying AI or adopting it without knowing, via AI updates to products they already use. Employees, he added, are already using AI and leaders will need to consider this demand now, not later, to ensure they don’t expose themselves to risks or undermine their AI adoption roadmap.
“The organizations that thrive won't simply be those that deploy AI first but it'll be those that deploy AI most responsibly,” he said, adding that leading organizations will recognize the strategic imperative of governance and embrace its potential benefits for innovation.
Describing governance as a “catalyst” for technological innovation, Townsend also warned that companies that push for AI without considering safe adoption will be hurt in the long term.
“Organizations without AI governance face not just potential regulatory penalties, depending on the markets they’re in, they face a potential competitive disadvantage because public trust has become the new currency for AI innovation.”
Clear roadmaps for AI governance
SAS is targeting reliable AI, having just announced new AI models for industry-specific use cases. Townsend explained that when it comes to AI that delivers predictable results, leaders should first establish a framework for measuring actual outcomes versus the expected outcomes, to which compliance and oversight teams can refer back.
“Now, this is a matter of oversight, for sure, this is a matter of operations, and this is a matter of organizational culture,” he said.
“All of these things combined are what represent this new world of AI governance, where there’s a duality, I like to say, that exists between these conveniently accessible productivity boosters that the team has been talking about this morning, intersecting with inaccuracies and inconsistency and potential intellectual property leakage.”
To address these issues, SAS has created the AI Governance Map, a resource for customers looking to weigh up their AI maturity across oversight, compliance, operations, and culture.
This helps organizations to assess their AI governance no matter where they are with AI implementation, rather than mandating they replace current AI systems with those specified by SAS.
In the near future, SAS has committed to releasing a new holistic AI governance solution for monitoring models and agents, as well as handling AI orchestration and aggregation. No further details on the tool have been revealed, but SAS has invited interested parties to sign up for a private preview later in 2025.
Townsend’s comments echo those of Vasu Jakkal, corporate vice president at Microsoft Security, who last week told RSAC Conference 2025 attendees that governance is an irreplaceable role and would be critical as organizations adopt more AI agents.
Just as SAS is stressing that governance is a key concern, Jakkal took the position that human-led governance will stay relevant as organizations look to keep tabs on the decisions that AI agents make. Both are a pared-back version of the future predicted by the likes of Marc Benioff, who in February claimed that today’s CEOs will be the last to manage a fully human workforce.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.