When it comes to AI governance, most companies start with policy. The smart ones move fast to testing.

In IAPP’s AI Governance Profession Report 2025, companies like IBM, BCG, Kroll, and TELUS reveal how they’re taking governance beyond static documents — into the realm of live fire exercises, risk platforms, and red teaming.

Far from theoretical exercises, these are how top organizations are de-risking AI in production environments, aligning with regulations, and building trust inside and outside their walls.

IBM: Institutionalizing red teaming

IBM is embedding adversarial testing into its core capabilities. “As red teaming gains importance in AI governance, this skill set will be more in demand. While automation is crucial for scaling red teaming, human oversight remains essential.”

They’ve built out an AI Ethics Board, added compliance specialists to business units, and created an Office of Privacy and Responsible Technology — all while maintaining the speed of delivery through integrated governance processes.

In short: it’s about risk readiness.

BCG: A custom-built risk platform

Meanwhile, Boston Consulting Group has gone full stack. They’ve created a centralized, AI-specific risk management platform that integrates engineering, security, legal, and compliance teams into a single decision loop. “Inside the tool exists the ground truth for the company’s projects, where all relevant data around risk and mitigation efforts live.”

“By having multiple teams interact and collaborate through the use of the risk management tool, BCG has been able to find a common language around how risks are defined and actioned.”

Governance is operationalized, and updates to risks and mitigations are logged in real time.

Kroll: Offensive security meets governance

At Kroll, governance goes all the way to offensive security testing, trying to break AI systems. “This coalition makes up the core AI governance team, which, in addition to considering the legal and regulatory implications of AI use, brings specialized technical experience to client solutions, including offensive security teams, red teaming, large language models and forensic experts.”

TELUS: Purple teaming for pre-deployment safety

TELUS has taken a collaborative approach to stress testing. “The Purple Team is in reference to ‘purple teaming,’ a collaborative approach to identify both weaknesses and mitigations through adversarial testing to support the robustness of an application.”

Anyone in the organization can join the team and test new AI tools before they go live. The result?

“This diverse group and open exercise helps the DTO and the broader business gain large amounts of data on safety and functionality before releasing tools for all employees or customers, while also creating additional buy-in among stakeholders that the tools will not be released until proven safe.”

Governance as team sport.

Final thought: Real governance is stress-tested

The smartest AI leaders know governance isn’t about saying “no”, it’s about asking, “What if this goes wrong, and how do we know?” Whether it’s IBM’s red teams, BCG’s risk platform, or Kroll and TELUS’s adversarial testing methods, the message is clear: You don’t govern AI by writing rules. You govern it by poking it until it breaks, and fixing what you find.