IBM’s watsonx.governance promises to build trust in generative AI


IBM has unveiled its new watsonx.governance platform in a bid to help organizations responsibly manage risk and compliance when deploying large language models (LLMs).

Watsonx.governance is the third arm of IBM’s watsonx artificial intelligence (AI) and data management platform, designed to support organizations leveraging and scaling AI tools.

The new solution, generally available on 5 December 2023, will help organizations deploy generative AI tools responsibly by ensuring they adhere to their own internal AI governance policies and prepare for upcoming regulations.

The platform will provide insights into the ethical considerations of deploying LLMs as organizations look to get the most out of generative AI without exposing themselves, or their clients, to the risks AI can pose.

“Watsonx.governance is a one-stop-shop for businesses that are struggling to deploy and manage both LLM and ML models, giving businesses the tools they need to automate AI governance processes, monitor their models, and take corrective action, all with increased visibility,” said Kareem Yusuf, senior vice president, product management and growth at IBM Software.

“Its ability to translate regulations into enforceable policies will only become more essential for enterprises as new AI regulation takes hold worldwide.”

What users can expect from watsonx.governance 

The new software will enhance AI governance by addressing three intersecting business needs around AI deployment, according to IBM: lifecycle management, model evaluation, and risk assessment.

Lifecycle management consists of providing automated alerts to stakeholders when models operate in ways they were not designed to and producing documentation on how models were developed and implemented. 

IBM said this will be vital for demonstrating that organizations are complying with regulations.

Watsonx.governance will evaluate models by producing a variety of metrics associated with the accuracy, drift, or bias of the model, IBM said. 

This includes metrics indicating the safety of the model by assessing the use of potentially harmful language, or how the input and output of personally identifiable information (PII) is handled.

Finally, the software will provide metrics on the health of the model based on the size of the dataset it was trained on, as well as the model’s throughput and latency.

The risk assessment component of watsonx.governance introduces the human element to governing the roll-out of generative AI, according to IBM.

The tool will help automate many of the basic functions and tasks involved in AI governance while also identifying areas where human intervention is required to ensure accountability and transparency.

This includes creating auditable documents that demonstrate which agents carried out what checks relating to specific model behaviors. 

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.