AI Training vs Experimentation
and why the research gets it wrong
There has been a lot of talk recently about the difference between giving teams formal AI training at work and giving them the freedom to experiment and find new ways of operating.
Some big research companies out there are saying that AI experimentation slows “time-to-value.” That letting teams figure it out on their own creates inefficiency. That you need structured training programs to get value fast.
I don’t fully agree.
I understand where they’re coming from. You don’t want a handful of siloed teams creating the same things, or chasing end goals that might not actually matter. You don’t want five different departments all building their own version of the same tool because nobody’s talking to each other.
That’s waste.
That’s inefficiency.
I get it. But I think that’s a very small piece of the puzzle. And more importantly, I think it’s a problem you solve with communication and planning, not by shutting down experimentation.
I know there are a lot of companies out there trying to sell you on training your whole team on how to leverage AI. I think that’s a great thing for certain organizations. Companies over 5,000 people likely benefit a lot from having everyone on the same page. When you’re that big, standardization matters.
Consistency matters.
You need frameworks, shared language, and structured rollouts. But for companies under that size, or for organizations willing to lean into innovation, there’s a better way. And it starts with recognizing something that formal training programs often miss.
In a company, the only true experts on your processes, procedures, and back-end quirks are your people.
Not consultants.
Not external trainers.
Not the team building the curriculum.
Your people.
They know the pain points inside and out, and I’d bet they’ve been asking for solutions for a long time. They know where the system breaks. Where the manual work piles up. Where things take three times longer than they should. So why wouldn’t we tap into the expertise already inside your business?
Why would we assume that someone from outside, who has never lived in your workflows, knows how to apply AI better than the people who deal with those workflows every day?
Let’s take, for example, an HR team that struggles to pull tangible insights from your yearly employee survey. Maybe there is so much data that it’s hard for them to find the right signals. They’re drowning in responses but can’t find the patterns. Maybe the questions aren’t right for your population and don’t surface findings that could address real business problems.
You’re asking things that sound good but don’t actually tell you anything useful.
A team at this point could do a couple of things. They could hire an external company to do all of this for them. That’s a great option, and some vendors even offer AI integrations, but in most cases it only makes sense when you have a large employee base. And even then, you’re paying someone else to interpret your data, your people, your culture.
They’ll give you a report. It might even be a good report. But it’s still filtered through someone who doesn’t live in your organization.
They could have a current team member muddle through trying to make it better. Manual coding of responses. Spreadsheets. Pivot tables. Hours of reading through comments trying to find themes.
I’d assume this has been explored already. And I’d also assume it’s exactly why they’re frustrated.
Or they could leverage AI and experiment with how best to get the insights they need. They could take those thousands of survey responses and ask an LLM to find patterns. They could test different prompts to see what surfaces insights versus what just summarizes. They could try using AI to help write better survey questions in the first place. Questions that actually get at what they need to know.
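To make that last option concrete, here’s a rough sketch of what the first experiment might look like in Python. Everything in it is an assumption for illustration, not a prescription: the survey_responses.csv file, the comment column, the model name, even the choice of OpenAI’s SDK. The team would swap in whatever data and tools they actually have.

```python
# A rough sketch of the survey-analysis experiment.
# Assumptions: responses live in a CSV with a free-text "comment" column,
# and the team has LLM API access (OpenAI's Python SDK shown here).
import csv

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the open-ended survey comments.
with open("survey_responses.csv", newline="", encoding="utf-8") as f:
    comments = [row["comment"] for row in csv.DictReader(f) if row["comment"].strip()]

# Two prompts to compare: one that merely summarizes, and one that pushes
# for patterns and follow-ups. The experiment is seeing which framing
# actually surfaces something the team can act on.
prompts = {
    "summarize": "Summarize these employee survey comments.",
    "find_patterns": (
        "Identify the five most common themes in these employee survey "
        "comments. For each theme, estimate how many comments mention it, "
        "quote one representative comment, and suggest one follow-up "
        "question we should ask next year."
    ),
}

sample = "\n".join(comments[:200])  # start small; chunk or batch for thousands

for name, instruction in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model works
        messages=[
            {"role": "system", "content": "You are an HR analytics assistant."},
            {"role": "user", "content": f"{instruction}\n\nComments:\n{sample}"},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

The script isn’t the point. The loop is: run two framings side by side, compare what comes back, and iterate on the one that surfaces real patterns instead of generic summaries.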
If I were leading this team, I’d go with the last option every time. We don’t need some crazy formal training on how to leverage AI in this case. We don’t need a three-day workshop on prompt engineering or a certification program. Nor do we need to keep going down the same path we’ve been on, hoping it magically gets better.
We should be letting teams, who again are the experts in the company, experiment and find new ways to leverage AI to build a better employee survey with proper analytics.
Give them time.
Give them permission.
Give them support when they get stuck.
But let them figure out what works for their specific problem.
Here’s what I think happens when you do this. The team learns faster because they’re solving real problems, not hypothetical ones. The solutions are better because they’re built for your actual workflows, not generic best practices.
And the team actually owns the outcome instead of depending on someone else to tell them how to do their jobs. From my perspective, this is where the research around time-to-value misses the point.
If we look at the real challenge people raise against this approach, it comes back to the time-to-value I mentioned earlier. The concern is that experimentation takes too long. That you’ll waste time on dead ends. That formal training gets everyone productive faster. But that argument rests on the assumption that you don’t have a plan and your teams aren’t communicating with each other. That experimentation means chaos. That letting people try things means nobody’s aligned. That’s not experimentation. That’s just lack of coordination.
However, with proper planning and communication in place, I think you’re well on your way to seeing huge internal innovation, with a short time-to-value, from the internal teams experimenting with AI. You set some guardrails. You create space for teams to share what they’re learning. You make sure someone’s paying attention to what’s working and what’s not. You avoid the duplicate-effort problem by actually talking to each other.
The key isn’t choosing between training and experimentation. It’s recognizing that formal training gives you theory and experimentation gives you practice.
And in my experience, practice beats theory almost every time when you’re trying to solve real problems. I’m not saying training has no place. If you’re rolling out a specific tool company-wide, train people on that tool. If you need everyone to understand basic safety or compliance around AI, train on that.
But if you’re trying to figure out how AI can actually improve how your teams work, let them experiment. The teams who know the problems best are the ones most likely to find solutions that actually stick.
The training can come later, once you know what’s worth training on.
What’s your take? Are you leaning toward formal training or letting teams experiment? And if you’re experimenting, what’s working and what’s not?

