
The Hard Part of Product Was Never the Code: Why Empathy is the Ultimate AI Leverage
The dominant story about AI right now goes something like this:
Soon, every knowledge worker will have a tireless digital assistant. Software will be built by describing what you want in plain English. Products will iterate at the speed of thought. Startups that used to need 50 people will need 5. The only limit is imagination.
In that story, speed is the breakthrough. But speed only helps if you already know what matters.
AI makes it cheaper to build. It doesn’t make it clearer what’s worth building. And it definitely doesn’t make the hard calls for you. Output was never scarce. Attention is.
Building great products is an act of empathy, not output
My friend Lloyd Tabb has a line I often steal because it’s both simple and profound:
“Building great software is an act of empathy.”
Great products don’t begin with a requirements doc or a backlog of features. They begin with a moment of discomfort:
- “This workflow is maddening.”
- “Why is this form so confusing?”
- “Why does this tool make me feel stupid?”
Before there’s a roadmap, there’s frustration. Or anxiety. Or fatigue. Or a quiet sense that the tool is working against you.
Building something people love is an organized response to that pain. You notice it (in yourself or someone you care about). You care enough to understand it more deeply than it’s been articulated. Then you translate that understanding into tradeoffs that quietly remove friction.
That act—noticing, caring, interpreting, choosing—is empathy plus taste.
And taste is mostly subtraction.
The part AI can’t do
AI can pattern-match the language of pain. It can summarize interviews. It can generate ten plausible redesigns and twenty versions of the onboarding copy.
But it doesn’t experience the friction. It doesn’t have stakes. It won’t feel the embarrassment of a user who’s afraid to click the wrong button.
It doesn’t know what it’s like to be on a Zoom call at 7 pm with a screaming kid in the next room, trying to navigate a bloated CRM. It doesn’t sit with someone and absorb the awkward pause when they get lost. It doesn’t feel the internal “maybe I’m just dumb” spiral that bad software can trigger.
That gap matters.
Because the best products hinge on micro-decisions you only make when you care: what to default, what to hide, what to prevent, what to explain, what to make reversible, what to never ask a human to do again.
We keep trying to fix an empathy problem with an autocomplete engine.
The “10x faster” red herring
One of the big selling points of AI is: “Feature development will be 10x faster.” Okay. Let’s take that seriously. If you look at the last 20 years of software, would you say the world’s great unsolved problem is… not enough features?
Most users aren’t wishing their tools had five times more buttons. They’re thinking:
- “I only use 10% of this.”
- “I’m scared to click anything because I’ll break it.”
- “I just want this to do the one thing I care about, reliably.”
Feature velocity feels like progress internally (“Look at the changelog!”). But from the user’s perspective, it’s often noise: more knobs to ignore, more ways to get lost, more chances for something to break. Users don’t get 10x more cognitive bandwidth because you can ship 10x more. Attention, tolerance for change, and desire for predictability don’t scale with model size.
AI accelerates output, not comprehension.
And there’s a predictable leadership trap here: if building gets cheaper, organizations tend to spend the savings on building even more. Every new pocket of capacity gets converted into more shipping.
But the constraint hasn’t moved. You’ve just increased the risk of burying users under change.
Speed isn’t the goal. Learning is.
Here’s the nuance I wish were louder in AI discourse: speed can matter a lot—but only when it shortens the loop between question and truth.
Ship to learn, not to look busy.
Speed is leverage when it compresses this cycle:
build → show → watch → learn → decide → simplify
If you can prototype faster, test assumptions sooner, and converge more quickly on what actually works, that’s a real advantage.
Otherwise, speed just amplifies direction. And if the direction is wrong, you don’t get “10x better.” You get it wrong faster.
A quick reality check for any team building with AI
If you want to know whether you’re bottlenecked by speed or by empathy/judgment, ask:
- If we shipped this 10x faster, would users be 10x happier? If no, speed isn’t the constraint.
- Do we understand user fear/confusion well enough to delete half the UI? If no, empathy is the constraint.
- Are we learning from real behavior weekly, not quarterly? If no, your bottleneck is cycle time to insight. (AI can help a lot here—by making experiments cheaper and faster.)
This is the uncomfortable truth: most teams aren’t limited by the ability to build. They’re limited by the ability to choose.
“But AI will design the product, too.”
A common counterargument is: “Sure, AI makes code cheap—but soon it will also handle product decisions. It will analyze behavior, read feedback, and propose the roadmap. It will have taste.”
I’m skeptical for a simple reason: taste isn’t a purely rational process. Taste lives in:
- the conviction that something should be simpler even when users aren’t asking
- the willingness to delete 80% of a product people are “fine” with
- the courage to be opinionated instead of maximalist—to say no more than yes
AI is extremely good at generating options—what looks plausible based on what already exists. It can expand the menu.
But someone has to be accountable for the call. Because the real product decisions are values decisions hiding in plain sight:
- When do we automate vs ask permission?
- What do we optimize for: speed, safety, clarity, or power?
- What do we default to—and what do we make reversible?
- Who are we building for, and who are we willing to disappoint?
Models can suggest. They can’t own consequences.
And breakthrough products are often non-consensus bets: choosing a future the data can’t fully justify yet. AI can help you explore that future. It can’t take responsibility for betting on it.
“Okay, but speed is the bottleneck sometimes.”
Yes. Timing matters.
There are real categories where moving fast is decisive: platform shifts, distribution windows, network-effect races, competitive parity moments, and security incidents.
But even there, “speed” isn’t “ship everything.” It’s: ship the right thing with urgency.
If you win a land-grab by shipping something people don’t trust, you haven’t won—you’ve just scaled churn.
Where AI will help (and it’s still big)
None of this means AI is overhyped. It’s powerful—just not in the way most people assume.
AI will be extremely good at:
- commoditizing boilerplate (docs, routine code, basic support)
- reducing operational drag (glue work, repetitive internal tasks)
- speeding iteration inside clear constraints (prototypes, QA, analysis, refactors, migrations)
- leveling the floor (helping more people reach “good enough” in writing, design, and analysis)
These are meaningful gains. They eliminate drudgery. They give teams time back. They raise the baseline.
But they don’t automatically give you:
- deeper understanding of users’ emotional realities
- better judgment about which problems matter
- the discipline to say no to 90% of what you could build
- the courage to simplify when complexity is “working” internally
Those still require human leadership, human values, and human closeness to real problems.
What should leaders do in an AI-saturated world?
If you lead a product org, the question isn’t “How do we ship more?” You can. Everyone can.
The question is: how will you use the extra capacity without making your product worse?
A few changes that actually move the system:
- Protect empathy loops. Make it normal for PMs, designers, and engineers to watch users struggle in real time—not just read summaries. AI can compress information; it cannot replace the emotional confrontation with reality.
- Make simplicity a deliverable. Celebrate deletion. Reward teams for removing steps, shrinking onboarding, reducing settings, and choosing smarter defaults. If AI makes it cheap to add, leadership has to make it prestigious to subtract.
- Use AI to explore, then decide like a human. Let models generate options, drafts, prototypes, and edge cases. But keep accountability for the call—what you build, what you don’t, what you break, and which tradeoffs you accept.
- Optimize for learning speed, not feature speed. Shorten the time from question → evidence → decision. If you’re going to go faster, go faster at finding out what’s true.
- Treat defaults as ethics. Your product is the set of choices you make for people. Be intentional about what you automate, what you hide, and what you make easy to reverse.
- Anchor the org on a point of view. In a world where anyone can clone functionality, differentiation comes from opinions: who you’re for, what you refuse to do, what “better” means to you. AI can help execute a point of view; it can’t supply one.
AI won’t fix the hard part
There’s a comforting fantasy in tech that the “real” work is building—the code, the features, the artifacts. If that were true, AI would feel like salvation: finally, a way to do more of the real work, faster.
But spend time with actual users, and it becomes obvious: the hard part isn’t building.
It’s knowing what not to build. It’s being honest about whether what you’ve built truly helps someone in the messy, context-heavy details of their life.
AI doesn’t move that frontier. It just makes it easier to spray attempts at the wall.
The teams that win in an AI-saturated world won’t be the ones that ship the most AI features. They’ll be the ones who stay closest to real human pain—and use AI as leverage, not as a substitute for thinking, noticing, and caring.
AI may change a lot. It won’t change the core truth: building great software is an act of empathy.
The bottleneck is still attention—the kind that notices, cares, and chooses.