Reengineering HR with AI
Week 2
An employee told me in their exit interview that they’d been frustrated with our promotion process for eight months. Eight months. I asked why they never said anything. They looked at me like I was insane. “Who was I supposed to tell? My manager never had the answers and always sent me to various people. Everyone always seemed too busy to talk to me.”
They were right. We were busy.
Running programs, fixing problems, responding to escalations.
But we’d never once asked them, or anyone else, if what we were doing was actually working.
HR treats employees and managers like customers we never survey. We build performance review processes, run engagement programs, roll out new tools, and we never stop to ask “Is this working for you?” We just assume the silence means success.
I’ve sat in countless HR team meetings where we debated whether our new feedback process was effective. Someone would say “I think people like it.” Someone else would counter with “I heard a few complaints.” And we’d make decisions based on vibes and the three people who happened to email us.
Meanwhile, every other function in the company obsesses over stakeholder feedback. Sales talks to customers constantly. Product runs user testing. Marketing measures everything. But HR? We launch a new goal-setting process and if no one actively revolts, we call it a win.
The annual engagement survey doesn’t solve this. By the time people fill it out, they’ve been sitting on frustrations for months. They’re answering questions we wrote, not telling us what actually matters to them. And half the organization doesn’t even bother responding because they’ve learned nothing changes anyway.
I watched this play out with a performance review process we spent six months building. The CoE was proud of it. We’d simplified the forms, clarified the ratings, added better manager training. Launch day came and went without major issues.
Three months later, a senior manager pulled me aside. “This new process is a disaster for my team. The timeline doesn’t work with our project cycles. The questions don’t fit what we actually do. I’ve had four people ask me if they can just skip it.”
I asked why he didn’t say something sooner. He shrugged. “I figured you all knew what you were doing. And honestly, I didn’t think it would matter.”
That’s the cost. Not just a broken process, but an entire organization of people who’ve learned their feedback doesn’t matter, so they stop giving it. We lose the signal we need to actually improve. We keep investing time and money into programs that don’t work, wondering why engagement scores stay flat.
And when people finally do leave, they tell us everything in the exit interview. When it’s too late to fix anything.
I know why we don’t ask. We’re scared of what we’ll hear. If we actually asked employees whether our performance review process is useful, some of them would say no. If we asked managers whether our training programs help, we might not like the answers. It’s easier to keep moving, keep building, keep assuming we’re on the right track.
There’s also the practical reality.
Most organizations don’t have a People Analytics team. We don’t have the capacity to run regular check-ins, analyze the feedback, and do something with it. So we stick with the annual survey and convince ourselves it’s enough.
But the bigger truth?
We’ve confused activity with impact. We measure how many training sessions we ran, how many policies we updated, how many tickets we closed. We don’t measure whether any of it actually helped the people we’re supposed to serve.
Here’s what changes when you use an AI agent to run regular pulse checks with your stakeholders.
An employee responds to a quick check-in with, “I’m frustrated with communication from leadership.” A standard survey would log that response and move on. You’d get a data point that says “communication is a problem” along with fifty other data points, and you’d have no idea what to do with any of them.
An AI agent asks a follow-up question: “What specific communication are you missing?”
The employee explains, “I don’t know how my project connects to the company’s priorities. My manager doesn’t know either.”
The agent digs deeper, “When’s the last time you felt clear on priorities?”
The employee thinks, “Probably the all-hands in September, but that was four months ago.”
Now you have something actionable. Not just “communication is bad,” but “people lose clarity on priorities between quarterly all-hands meetings.” That’s a problem you can actually solve.
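If you want to peek under the hood, here’s a minimal sketch of that follow-up loop in Python. It assumes the OpenAI Python SDK with an API key in your environment; the model name, the system prompt wording, and the pulse_check_turn helper are illustrative placeholders, not the one right way to build this.

```python
# A minimal sketch of a pulse-check agent that asks follow-up questions.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are running a short HR pulse check. Ask one opening question, then "
    "ask at most two follow-up questions that turn vague complaints into "
    "specifics: what exactly is missing, when it last worked, and what would "
    "make it better. Be brief, neutral, and never defensive."
)

def pulse_check_turn(history: list[dict]) -> str:
    """Given the conversation so far, return the agent's next question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

# Example: the employee's first answer comes in, the agent digs deeper.
history = [
    {"role": "assistant", "content": "What's one thing about our performance review process that's not working for you?"},
    {"role": "user", "content": "I'm frustrated with communication from leadership."},
]
print(pulse_check_turn(history))  # e.g. "What specific communication are you missing?"
```

Whatever agent platform your company already uses can do the same thing. The tooling isn’t the point; the follow-up question is.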
This isn’t about replacing human connection. It’s about creating a bridge between “everything’s fine” and “I’m leaving because nothing ever changes.” The AI agent gives people a safe space to be honest without worrying about hurting someone’s feelings or seeming like they’re complaining. It asks the follow-up questions that humans often don’t have time for. And it does this regularly, not once a year when grudges have piled up.
Start small.
Next week, set up a simple AI agent. (I promise it’s easy to get one going. Take your time.) Write a prompt that asks your stakeholders one question: “What’s one thing about [specific HR process] that’s not working for you?” Have the agent follow up with “Can you give me a specific example?” and “What would make this better?”
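If it helps, here’s one way that prompt might read. The bracketed process name and the exact wording are placeholders to swap for your own process and voice.

```
You're collecting feedback on [specific HR process].
Start with: "What's one thing about [specific HR process] that's not working for you?"
If the answer is vague, follow up with: "Can you give me a specific example?"
Then ask: "What would make this better?"
Keep it short, stay neutral, and thank them when you're done.
```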
Send it to ten managers. See what you learn. I’m betting you’ll discover at least three things you had no idea were problems.
If you’re not regularly asking your stakeholders whether your work is actually helping them, how do you know you’re not wasting everyone’s time?
You can’t claim to be strategic if you’re building programs in a vacuum. You can’t claim to care about employee experience if you never measure whether your processes make that experience better or worse.
AI agents aren’t coming for People Analytics jobs. They’re giving the rest of us a chance to finally do what we should have been doing all along: listening at scale, digging into the real problems, and fixing things before people give up and leave.
So what are you going to ask first?

