How Salesforce approaches AI governance: practical lessons from Diane Igoe
By Alexei Dunaway | March 24, 2026 | 4 min read

As companies move from AI experimentation to production deployments, governance is quickly becoming the difference between scattered pilots and scalable impact.

At the recent AI:ROI Conference hosted by Section, Diane Igoe, Director of Agentforce Governance at Salesforce, shared how large organizations are structuring governance around AI agents and copilots.

Her approach offers a useful playbook for any company trying to scale AI responsibly. Below are the key practices she highlighted.

1. Assign clear ownership for every agent

AI agents cannot be treated like generic infrastructure. Each one must have a clear business owner.

At Salesforce, agents are managed like products.

“I have the product owners who own the agents.”

These owners are responsible for defining the use case, tracking outcomes, and maintaining the workflow.

This ensures AI deployments stay aligned with real business needs rather than becoming isolated technical experiments.

2. Build governance across multiple teams

Governance cannot live inside engineering alone.

Salesforce structures governance across three functions:

  • Business/product teams define the use case and outcomes
  • Engineering and data teams build and operate the systems
  • Legal and compliance teams define guardrails

As Diane explains:

“I have the product owners who own the agents, as well as I have legal who has to come and look at those guardrails from a compliance risk perspective.”

This structure ensures AI systems are aligned with both operational goals and risk management requirements.

3. Start internally before releasing externally

One of the strongest practices discussed was the concept of “customer zero.”

Companies should use their AI systems internally before deploying them to customers.

“We've used it within Salesforce… we have a product manager for every internal agent.”

Internal use allows teams to:

  • test workflows
  • refine performance
  • identify risks
  • measure impact

By the time agents reach customers, the organization already understands how they behave in real environments.

4. Define ROI before scaling

Many AI projects fail because companies start with technology rather than outcomes.

A simple framework she described includes:

  1. Define the use case
  2. Identify the workflow
  3. Establish baseline metrics
  4. Measure improvement

Without this structure, organizations struggle to move beyond experimentation.

5. Choose workflows that are good candidates for automation

Not every process should be automated.

A key evaluation question:

“How many steps are involved? And what's the human judgment that's required?”

The best AI workflows tend to be:

  • Repetitive
  • High-volume
  • Measurable
  • Operationally structured

Processes that demand heavy judgment or negotiation are often better suited to hybrid human-AI approaches.

The emerging operating model for AI governance

What emerges from Salesforce’s experience is a simple but powerful idea:

AI agents should be managed like a workforce.

They need:

  • Owners
  • Governance
  • Performance metrics
  • Guardrails

Organizations that treat AI this way will be far better positioned to scale safely.

See Pascal in action.

Get a live demo of Pascal, your 24/7 AI coach inside Slack and Teams, helping teams set real goals, reflect on work, and grow more effectively.

Book a demo