Robert Pirsig's 1974 Zen and the Art of Motorcycle Maintenance is a philosophy book disguised as a motorcycle trip. In it, he describes friends who want the benefits of their BMW without understanding how it works. When the machine misbehaves, they experience it as hostile and unpredictable.
That's what most enterprise AI adoption looks like.
The BMW Problem
The Sutherlands are educated, cultured people. They've made a specific choice: mechanical knowledge is beneath them, or beside the point, or someone else's job. So when their BMW breaks down, they're helpless. Worse, they're resentful. They blame the machine for their own chosen ignorance.
Pirsig watches this with a kind of quiet grief. He's not judging their intelligence. He's observing a particular kind of alienation that follows inevitably from use without understanding.
The pattern repeats with eerie consistency in AI adoption.
In February 2024, Air Canada was ordered to pay damages after its website chatbot told a grieving passenger he could retroactively claim bereavement fares. Advice that contradicted the airline's own policy page. When sued, Air Canada argued the chatbot was "a separate legal entity that is responsible for its own actions." The tribunal called this "a remarkable submission" and ruled against the airline: "It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."
The company had deployed technology it didn't understand operationally, then tried to disclaim ownership when it failed. That's the BMW problem as corporate policy.
That same year, McDonald's quietly pulled AI voice ordering from over 100 drive-throughs after viral videos showed the system adding hundreds of dollars of McNuggets to orders while customers begged it to stop. The technology worked, in the sense that it processed speech and generated tickets. What it lacked was any framework for recognizing when it was failing. So did the executives who deployed it to paying customers.
That's the BMW problem in the AI era. Benefits without understanding. Capability without diagnostic literacy. And when something breaks, no framework for figuring out why.
The Comprehension Gap
Surveys suggest a majority of adults now use large language models. But most don't understand how these systems work at even a basic conceptual level.
I'm not talking about transformer architectures. I mean the operational basics: these systems produce responses based on word patterns learned from internet text. They don't track truth by default. They don't have reliable grounding unless you build it in. They're prediction engines, not verification engines.
Understanding this changes how you use the tools. It explains hallucination (plausible doesn't mean true). It explains why some tasks work better than others (pattern density in training data). It explains why prompt engineering matters (you're shifting probability distributions).
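The "prediction engine" point is concrete enough to sketch. Below is a toy bigram model, not how production LLMs work, but it shows the core move they share: generate the next word by sampling learned frequencies, with no truth check anywhere in the loop. The tiny corpus is invented for illustration and deliberately contains a false sentence.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a
# tiny corpus, then generate by sampling those frequencies. Real LLMs
# are vastly more sophisticated, but the core move is the same:
# predict the next token from learned patterns, not from facts.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "  # false, but it's in the data
    "the capital of italy is rome ."
).split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, n=6):
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1]))
    return " ".join(out)

# The model has no notion of truth: "paris" and "lyon" are equally
# plausible continuations of "is", because both appeared in training.
random.seed(0)
print(generate("the"))
```

Fluency and accuracy come apart exactly here: the model's confidence in "lyon" is a fact about its training data, not about France.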
Not understanding it means you're the Sutherlands. When something fails, you'll have no idea why. You'll blame the model. You'll miss the actual problem, which is almost always in the system around the model.
The Abstraction Objection
Here's the obvious counterargument: Do people need to understand internal combustion to drive a car?
No. And most companies don't need to understand attention mechanisms to use AI productively. Abstraction is the whole point of tools.
But there's a difference between abstraction and blindness. You don't need to rebuild an engine, but you do need to know when the check engine light means "get this looked at" versus "stop driving immediately." You need enough diagnostic literacy to recognize failure modes before they cascade.
With AI, the equivalent is knowing what these systems are bad at. Knowing that confidence in output doesn't correlate with accuracy. Knowing that "sounds right" and "is right" can diverge dramatically. Knowing when to verify, what to verify, and how.
That's not deep technical knowledge. It's operational awareness. And most organizations don't have it.
The Quality Trap
Pirsig's book has a central obsession: What makes something good?
Quality, he argues, is what makes us know which measurements matter in the first place. It precedes analysis. You recognize it before you can explain it. The good mechanic notices the slightly wrong sound. The good writer feels when a sentence rhythm fails. The good engineer won't ship code they don't understand, even if it passes tests.
Quality emerges from care. From sustained attention. From genuine engagement with the work.
AI doesn't care. It has no stake in the outcome. This isn't criticism. It's just description.
The trap is thinking care can be automated. It cannot. And once you know what to look for, the absence becomes obvious.
AI-generated memos that "recommend cross-functional alignment" without naming a single tradeoff. Code that passes unit tests but fails on unexplored edge cases, on readability, or on invariants the tests don't cover. Strategy decks that list "high-impact use cases" without constraints, data requirements, or ownership.
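The code version of this failure is easy to reproduce. A hypothetical example: a function that satisfies its one happy-path test while breaking on an input the test never explored.

```python
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# The happy-path unit test passes, so the code "looks like quality":
assert average([2, 4, 6]) == 4

# But the edge case nobody explored fails at runtime:
# average([]) raises ZeroDivisionError, because no one cared enough
# to ask what an empty input should mean.
```

The test suite isn't wrong; it's incomplete in exactly the way unreviewed AI output tends to be.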
These outputs look like quality. The grammar is correct. The structure is reasonable. The professional tone is present. What's missing is the thing underneath: someone who cared enough to ask whether this actually makes sense.
If you're using AI to draft and you're not bringing judgment to the editing, you're not augmenting yourself. You're outsourcing the part that matters.
Gumption Traps for the AI Era
Pirsig coined a term that deserves wider use: gumption traps.
Gumption is the quality of enthusiastic engagement that makes good work possible. When you have it, problems become interesting. When you lose it, everything becomes a slog. Gumption traps are the things that drain this resource: anxiety, ego, boredom, impatience.
The AI-specific versions are predictable. Smart people fall into all of them.
The Delegation Trap. Difficult task? Hand it to AI. This feels efficient. It's also how you stop developing the judgment required to evaluate AI output. Engineers who lean heavily on code completion for years struggle to write basic algorithms from memory. They delegated the difficulty, and the difficulty was where the learning lived.
The Speed Trap. AI produces output fast. This creates pressure to move at AI speed, which means skipping the slow, iterative process where quality develops. First draft becomes final draft. The question that would have revealed the flaw never gets asked. Even small error rates compound fast. In long chains of AI-assisted decisions, quality fails by accumulation.
The Evaluation Trap. As AI output proliferates, evaluation becomes more important and harder to develop. You can only judge good writing if you've struggled to write. You can only spot flawed analysis if you've done enough analysis to internalize the patterns. Consuming AI output without this foundation is like being a restaurant critic who has never cooked.
The Ego Trap. This one cuts both ways. Refusing to use AI because "real professionals don't need it." Or claiming AI-assisted work as entirely your own. Both distort your relationship to the actual work.
Pirsig's antidote was attention. Noticing when you've fallen into a trap. Pausing. Recalibrating. The antidote hasn't changed.
Maintenance Thinking
Pirsig elevated maintenance from chore to philosophy. The person who maintains their own motorcycle develops a relationship with it. They understand its quirks. They can diagnose problems. They aren't alienated from their own tool.
What does it mean to maintain your own cognitive capacities when AI handles more of your work?
For individuals, something like this:
Regular unplugged practice. Write without AI assistance. Code without completion. Solve problems from first principles. Not as asceticism. As maintenance. Keep the muscles functional.
Calibration checks. Before accepting AI output, occasionally generate your own version first. Compare. Where did the AI do better? Where did you? This builds evaluation capacity you cannot develop any other way.
Depth gates. Before delegating a task: Do I understand this well enough to judge the output? If not, do it manually first. At least once.
For organizations, the requirements are structural:
Someone has to own evaluation. If nobody is accountable for catching confident nonsense, you'll produce confident nonsense at scale. You need feedback loops that surface failures before they reach customers. You need review processes built around AI's failure modes, not legacy QA checklists. You need metrics that punish plausible-but-wrong, not just metrics that reward volume.
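One illustrative shape such a metric could take (the weights here are arbitrary assumptions, not a standard): reward correct answers, allow abstention at no cost, and charge confident wrong answers more than silence.

```python
def score_response(answer, truth):
    """Score one AI response. The weights are illustrative:
    correct = +1, abstained = 0, confidently wrong = -2,
    so plausible-but-wrong costs more than saying nothing."""
    if answer is None:  # the system abstained
        return 0.0
    if answer == truth:
        return 1.0
    return -2.0

# Hypothetical responses: (model answer, ground truth)
responses = [("paris", "paris"), (None, "ankara"), ("lyon", "paris")]
total = sum(score_response(a, t) for a, t in responses)
# A volume metric would count two "answers" as output.
# This one surfaces the damage the wrong answer does.
```

The specific numbers matter less than the asymmetry: any metric that scores a confident error the same as an abstention is training the organization to produce confident errors.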
Most AI failures aren't individual weakness. They're missing organizational scaffolding.
The Tool Will Not Maintain You
Pirsig wasn't writing about motorcycles. He was writing about how to engage with technology without being diminished by it. How to use tools while remaining the source of quality they cannot provide.
The Sutherlands wanted benefits without engagement. They got alienation instead.
The path forward isn't romantic rejection of AI tools. That's just Sutherland-style alienation pointed in the opposite direction. The path forward is understanding deep enough for genuine competence. Evaluation rigorous enough to catch failures. Care sustained enough to produce work that actually matters.
AI scales output. Judgment doesn't scale itself.
A note on process: This piece, of course, was drafted, revised, and spell-checked with AI assistance, which, given the subject matter, seems worth pointing out explicitly. The ideas, judgment calls, and final editorial decisions are mine. The irony isn't lost on me. But the argument was never "don't use AI." It was "bring enough judgment to catch your own confident nonsense." I hope I have.

