The Qualitative Data Problem HR Has Always Had
You’re sitting in a meeting and someone asks why turnover spiked in Q2. You pull up the dashboard. You show the numbers. You break it down by department, tenure, role.
Then someone asks the question you can’t answer with a chart. “But why are they actually leaving?”
You have exit interviews. Performance reviews. Engagement survey comments. Hundreds of pages of words that people said. And you’ve read through them, highlighted themes, maybe even coded them manually if you had time. But you know you’re missing things. Patterns you can’t see because there’s too much text and not enough hours.
This is the problem HR has always had. We’re drowning in qualitative data but we’ve only had quantitative tools.
From my perspective, HR has spent decades trying to turn everything into numbers because that’s what we could measure. We count things. How many people left. How long roles stay open. What percentage of employees feel engaged.
And that’s useful. I’m not saying it’s not. But the numbers only tell you what happened. They don’t tell you why. They don’t capture the conversation where someone realized their manager didn’t trust them. The moment when a top performer decided to start looking. The pattern of small frustrations that added up to “I’m done.”
We’ve been trying to force the messy, complicated reality of human experience into cells in a spreadsheet. Because if we couldn’t quantify it, we couldn’t analyze it. And if we couldn’t analyze it, we couldn’t defend our recommendations in a business review.
I’ve been at companies with mountains of data to analyze, all of it from our engagement surveys. But when it came time to look at it, there were always too many “other priorities.” We never made the time to dig in, to bring a human brain to the analysis. So the data sat there, and its real value never came forward.
Here’s what I think is different now. LLMs (known to most as ‘AI’) can process qualitative data at scale in ways we never could before.
You can feed in every exit interview from the past two years and ask it to find patterns you might have missed. Not just word frequency. Actual themes. Connections between what people said in month three and what they said on their way out the door.
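If you’re curious what that looks like in practice, here’s a rough sketch, not a finished tool. It assumes your exit interview notes are saved as plain text files in a folder, that you’re using the OpenAI Python library with an API key already set, and the folder name, model, and prompt wording are all placeholders you’d adapt to your own setup.

```python
# Rough sketch: ask an LLM to surface recurring themes across exit interviews.
# Assumes exit interview notes live as .txt files in a local "exit_interviews"
# folder and that the OPENAI_API_KEY environment variable is set. The folder
# name, model, and prompt are illustrative, not a recommendation.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load every exit interview transcript and join them into one block of text.
interviews = [p.read_text() for p in Path("exit_interviews").glob("*.txt")]
combined = "\n\n---\n\n".join(interviews)

prompt = (
    "You are helping an HR team understand attrition. Below are exit interview "
    "notes separated by '---'. Identify the recurring themes, note roughly how "
    "often each one appears, and quote one anonymized example per theme.\n\n"
    + combined
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable model works; this one is a placeholder
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

That’s the whole idea: the hard part isn’t the code, it’s deciding what question to ask and whether the answer holds up when you check it against the source material.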
You can analyze open-ended survey responses without spending three days reading through them and another two trying to summarize what you found. You can spot the signals that matter without losing them in the noise.
You can take that stack of performance reviews and understand not just what ratings people got, but how managers actually talk about their teams. Who gets described as “steady” versus “high-potential.” What language shows up before someone gets promoted versus before they plateau.
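The performance review comparison works the same way. Here’s a sketch under the assumption that you’ve already pulled review text into two simple lists, one for people who were later promoted and one for people who plateaued; the variable names, model, and prompt are mine, not a prescribed method.

```python
# Sketch: compare how managers describe people who were later promoted versus
# people who plateaued. Assumes the review text has already been grouped into
# two Python lists; variable names and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

promoted_reviews = ["...review text..."]    # reviews written before a promotion
plateaued_reviews = ["...review text..."]   # reviews for people who plateaued

prompt = (
    "Compare the language in these two sets of performance reviews. "
    "What words and framings show up more in Group A (later promoted) than in "
    "Group B (plateaued)? Call out differences in tone, not just topics.\n\n"
    "Group A:\n" + "\n---\n".join(promoted_reviews)
    + "\n\nGroup B:\n" + "\n---\n".join(plateaued_reviews)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```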
This isn’t about replacing HR judgment. It’s about finally having tools that work the way HR actually needs to work. We’ve always known the insights were in the conversations, the comments, the things people say when they think someone’s listening. We just couldn’t process it all. Now we can.
For the first time, qualitative data becomes as usable as quantitative data. Maybe more usable, because it’s closer to what’s actually happening.
You stop having to choose between depth and scale. You can have both. You can understand the individual story and the pattern across 500 individual stories.
You can move faster. That question about why turnover spiked? You don’t need a week to manually review exit interviews. You can have an answer in minutes. Not a perfect answer. Not the only answer. But a real starting point based on what people actually said.
You can get ahead of things instead of always reacting. If an LLM spots a theme in your survey data that you wouldn’t have caught on your own, you can address it before it becomes a retention problem. Before it shows up in your turnover numbers. Before you’re explaining to leadership why half of your engineering team just quit.
There have been many times when I spent hours reading survey comments, one after another. It gets hard to retain everything you’re reading. The human brain wanders. For me, that means going down one path and forgetting the other things I noticed along the way. An LLM doesn’t have a meandering mind. It keeps hyper-focus exactly where you need it.
This all sounds great in theory. In practice, there are real questions I don’t have answers to yet.
How do you make sure the LLM isn’t just confirming what you already think? How do you know when it’s finding real patterns versus surface-level correlations? How much do you trust it versus trust your own read of the situation?
There’s also the question of what gets lost when you use AI to process something that used to require human attention. When I read through exit interviews myself, I pick up on things that aren’t about the words. It’s in the tone. In what someone didn’t say. The difference between someone who’s frustrated and someone who’s hurt. Can an LLM do that? Should it?
And then there’s the practical reality that most HR teams are barely keeping up with the basics. We’re talking about sophisticated AI analysis when some organizations are still using spreadsheets for headcount tracking.
Here’s what I keep coming back to. HR has always been about understanding people so we can help organizations make better decisions. But we’ve been limited by what we could practically analyze.
LLMs don’t solve everything. They don’t replace the relationship-building, the conversations, the human judgment that makes HR work. But they do make it possible to actually use all that qualitative data we’ve been collecting and underutilizing for years.
The question isn’t whether AI will change HR. It’s whether we’re ready to change how we work when we can finally access insights that were always there but out of reach.
What would you do differently if you could actually analyze every conversation, comment, and piece of feedback you’ve collected? And what’s stopping you from starting to figure that out now?

