The early adopters are already gone. They moved fast, found value, and built momentum while everyone else debated strategy. The approach most organizations take to AI adoption loses both groups that matter most.
Taylor Malmsheimer, COO of Section, works with hundreds of organizations navigating AI transformation. Speaking at Section's AI:ROI Conference, she identified 10 to 15 consistent barriers that prevent companies from capturing value from AI. The most common barrier after establishing your "why of AI" catches most executives off guard: access.
When executives dismiss this concern with "we've chosen Copilot, we've given the team access," they're missing the real story. Organizations that report significant ROI move quickly on pilots, wrapping them in one to two months rather than lingering for six months or more. The reason comes down to two groups pulling in opposite directions during slow rollouts.
Your 10 to 15% of natural early adopters have already experimented with AI on their own. They're impatient for official tools and clear direction. Meanwhile, the remaining 85% of your workforce hates change and needs convincing before they'll engage. Extended pilots satisfy neither group.
Eighty percent of executives report they've piloted or deployed an enterprise LLM. Employees tell a different story. Only a third of employees at organizations that approve of AI say they have access to a company-sponsored LLM.
That statistic excludes employees whose employers discourage or stay silent about AI. Even among companies that vocally approve of AI use, two-thirds of employees lack access to the tools leadership thinks they're providing. And sentiment reflects the gap. In recent Workday research, 44% of employee comments about AI and business strategy were negative, a clear signal that enthusiasm at the top isn’t translating into trust or access on the ground.
Rollouts remain messy. A few people get granted access to a ChatGPT Team account. More people hear about it and request access. Suddenly no one knows who has access to which tools or why. Sometimes a platform gets rolled out to a small team, leadership gets spooked, and capabilities get pulled back or hampered. Very often access is opt-in rather than opt-out, and employees don't know they should ask.
There's another common failure: the rollout doesn't include the best model or the best capabilities. During IT or procurement review, the platform gets whittled down to exclude its most powerful features.
For example, organizations roll out ChatGPT without turning on custom GPTs. Executives who've found significant ROI from their AI investments react with shock when they learn this, because they know advanced capabilities help employees better understand what AI can actually do for them.
When beginners experience a hamstrung platform, they assume AI doesn't work well, and confirmation bias kicks in: they already want to believe AI isn't as powerful as others claim. Advanced users, meanwhile, get frustrated because they know these capabilities exist and are using them in their personal lives. Shadow AI rates go through the roof.
Even an imperfect rollout works if you execute a fast pilot in one to two months. Problems emerge when pilots linger longer. Novices and beginners get anxious: they hear about AI use but aren't experiencing it themselves, they start worrying about their job security and the quality of AI-generated work, and they become more resistant. Advanced users without access get frustrated and go underground. Shadow AI use spikes.
If pilots extend beyond one to two months, you lose momentum with early adopters while building anxiety among the majority. Extended testing signals uncertainty from leadership about whether AI delivers value. Natural resistance to change hardens rather than softens.
Taylor emphasizes that AI creates unique emotional responses. "People are thinking about how does it impact their creativity? How does it impact their job security? What does it mean if AI is great at something you've thought your whole life you were great at?"
Every day of delay gives employees more time to fill in their own narrative. Without quick proof points and clear direction, fear and speculation take over.
One to two months provides enough time to test functionality and identify early use cases without losing momentum. The compressed timeline forces teams to focus on critical questions instead of exploring every edge case. Early adopters stay engaged because they see movement.
Short pilots let you iterate quickly. When something doesn't work, you're only one or two months in rather than halfway through a six-month timeline. You can adjust and move forward without the sunk cost fallacy weighing on decisions.
If you're going to invest in an LLM, you have to go all-in to get real transformative ROI. Half measures don't work. Your mandate: get every knowledge worker access to an enterprise LLM with the most advanced capabilities you can turn on.
You also need a fast, efficient approval process to allow functional teams to pilot tools that cater to their specific goals. Organizations consistently report frustration that it takes six to nine months to get approval to test a marketing-specific platform. That timeline doesn't work at the rate these platforms change.
Based on Section's research, more than 75% of employees at typical companies say there isn't a formal AI strategy, even after leadership has deployed multiple AI tools. When pilots drag on, that perception of strategic confusion deepens.
Short pilots work really well if you've established what Taylor calls the "why of AI." This manifesto ties AI directly to your core business mission and gives employees the context they need to engage productively during compressed timelines.
Without this foundation, a short pilot looks rushed. With it, speed communicates confidence and urgency around something that matters to organizational survival or competitive position.
Your why cannot center only on efficiency. "You can't rally your employees around this to really drive the usage and the experimentation you need," Taylor warns. "It introduces too many questions around job security and who benefits from that efficiency."
The efficiency gains follow, but they're not the rallying cry.
The point isn't perfection. The point is learning fast enough to make informed decisions while maintaining momentum. When the pilot ends, you should know three things.
- Does this solution connect to your why of AI in ways employees find compelling?
- What specific barriers prevent broader adoption?
- What would success look like at scale?
If you can answer those questions after one to two months, you're ready to decide. Move forward with adjustments, run a second focused pilot on a different use case, or stop and redirect resources.