
As companies move from AI experimentation to production deployments, governance is quickly becoming the difference between scattered pilots and scalable impact.
At Section's recent AI:ROI Conference, Diane Igoe, Director of Agentforce Governance at Salesforce, shared how large organizations are structuring governance around AI agents and copilots.
Her approach offers a useful playbook for any company trying to scale AI responsibly. Below are the key practices she highlighted.
AI agents cannot be treated like generic infrastructure. Each one must have a clear business owner.
At Salesforce, agents are managed like products.
“I have the product owners who own the agents.”
These owners are responsible for defining the use case, tracking outcomes, and maintaining the workflow.
This ensures AI deployments stay aligned with real business needs rather than becoming isolated technical experiments.
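In practice, this kind of ownership can be made explicit in a lightweight agent registry that refuses to track an agent without an accountable owner. A minimal sketch; the names, fields, and example agent are illustrative, not Salesforce's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One deployed AI agent, managed like a product."""
    name: str
    product_owner: str                       # the accountable business owner
    use_case: str                            # the outcome the agent exists to deliver
    outcome_metrics: list = field(default_factory=list)

registry: dict[str, AgentRecord] = {}

def register_agent(record: AgentRecord) -> None:
    """Refuse to register an agent that lacks a clear business owner."""
    if not record.product_owner:
        raise ValueError(f"Agent {record.name!r} has no product owner")
    registry[record.name] = record

# Hypothetical example agent, owned like a product with tracked outcomes.
register_agent(AgentRecord(
    name="case-triage",
    product_owner="support-ops",
    use_case="Route inbound support cases to the right queue",
    outcome_metrics=["time-to-first-response", "misroute rate"],
))
```

The point of the sketch is the invariant: an agent cannot exist in the system without a named owner, a use case, and metrics to track.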
Governance cannot live inside engineering alone.
Salesforce structures governance across three functions: product, engineering, and legal.
As Diane explains:
“I have the product owners who own the agents, as well as I have legal who has to come and look at those guardrails from a compliance risk perspective.”
This structure ensures AI systems are aligned with both operational goals and risk management requirements.
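One way to operationalize this cross-functional structure is a deployment gate: an agent ships only after every governance function has signed off. A hypothetical sketch (the function names are assumptions based on the structure described above):

```python
# Governance functions whose review is required before an agent ships.
REQUIRED_SIGNOFFS = {"product", "engineering", "legal"}

def missing_signoffs(approved: set[str]) -> set[str]:
    """Return which governance functions still need to review this agent."""
    return REQUIRED_SIGNOFFS - approved

def ready_to_deploy(approved: set[str]) -> bool:
    """An agent is deployable only when no sign-offs are outstanding."""
    return not missing_signoffs(approved)
```

The design choice here is that the gate is a conjunction: no single function, including engineering, can approve a deployment alone.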
One of the strongest practices discussed was the concept of “customer zero.”
Companies should use their AI systems internally before deploying them to customers.
“We've used it within Salesforce… we have a product manager for every internal agent.”
Internal use lets teams observe how agents behave, surface failure modes, and refine workflows before any customer is exposed.
By the time agents reach customers, the organization already understands how they behave in real environments.
Many AI projects fail because companies start with technology rather than outcomes.
The simple framework she described inverts this: define the desired business outcome first, then design the workflow and select the technology.
Without this structure, organizations struggle to move beyond experimentation.
Not every process should be automated.
A key evaluation question:
“How many steps are involved? And what's the human judgment that's required?”
The best AI workflows tend to be multi-step, repeatable, and low in required human judgment.
Processes requiring heavy judgment or negotiation often require hybrid human-AI approaches.
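The two evaluation questions (how many steps, and how much human judgment) can be turned into a rough triage heuristic. A hedged sketch; the thresholds and labels are illustrative assumptions, not rules from the talk:

```python
def triage_workflow(steps: int, judgment: str) -> str:
    """Classify a candidate process for automation.

    judgment: 'low', 'medium', or 'high' required human judgment.
    Thresholds below are illustrative assumptions.
    """
    if judgment == "high":
        return "hybrid"        # heavy judgment or negotiation: keep a human in the loop
    if steps >= 3 and judgment == "low":
        return "automate"      # repeatable, multi-step, low-judgment work
    return "evaluate further"  # ambiguous cases deserve a closer look
```

Used as a first pass over a backlog of candidate processes, a heuristic like this keeps the conversation anchored on the two questions rather than on the technology.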
The emerging operating model for AI governance
What emerges from Salesforce's experience is a simple but powerful idea: AI agents should be treated like products, not generic infrastructure.
They need clear business owners, cross-functional guardrails, and internal validation before they ever reach a customer.
Organizations that treat AI this way will be far better positioned to scale safely.
