Polanyi's Paradox
We know more than we can tell. What happens when you automate what you can't articulate?
How Do We Know?
"We know more than we can tell."
- Michael Polanyi, The Tacit Dimension, 1966
You can recognize a face in a crowd instantly. But try to describe that face precisely enough for someone else to recognize it. You can ride a bicycle without thinking. But try to write instructions that would teach someone who's never seen one.
This is Polanyi's Paradox: the gap between what we can do and what we can explain.
In 1966, this was philosophy. In 2026, it's the central challenge of AI deployment.
The Tacit Dimension
Polanyi identified knowledge that exists below the surface of articulation:
• Explicit knowledge: Rules, procedures, documented processes. "If X, then Y." ✓ Easily automated
• Implicit knowledge: Context that could be articulated but usually isn't. "Everyone knows that..." ⚠ Requires deliberate extraction
• Tacit knowledge: Skills and intuitions that resist articulation entirely. "I just know." ✗ Cannot be directly automated
Most automation projects focus on explicit knowledge: the documented procedures, the written rules. But organizational capability lives largely in the other two layers.
The Automation Trap
When organizations automate, they typically:
1. Document the process: "First, check the invoice number. Then, verify the amount..."
2. Build the system: AI that follows the documented steps.
3. Remove the human: "The process is automated now."
But the documented process was never the complete process. It was the articulable surface of a much deeper capability.
The Scar Pattern
Organizations discover Polanyi's Paradox after the expert retires. The replacement has all the documented procedures. Everything should work. But edge cases start failing. Clients start complaining. Something is missing that no one can name, because it was never named in the first place.
Where Tacit Knowledge Dies
Process optimization: "We removed the redundant review step." But that step was where the veteran's intuition caught problems the rules missed.
Departmental automation: "The AI handles it now." But the AI learned from the documented process, not from the undocumented judgments.
Knowledge transfer: "We captured everything in the wiki." But the wiki contains what could be written, not what could only be shown.
Expert retirement: "We did a thorough handover." But handovers transfer explicit knowledge. Tacit knowledge requires apprenticeship, and there wasn't time.
The AI Amplification
AI systems make Polanyi's Paradox worse in a specific way:
The Confidence Problem
When a human encounters a case outside their tacit expertise, they often feel uncertain. They hesitate. They ask for help. AI systems don't have this safety valve. They process novel cases with the same confidence as familiar ones, because they don't know what they don't know.
The human expert's uncertainty was a feature, not a bug. It was tacit knowledge about the limits of their own tacit knowledge. AI systems lack this second-order awareness.
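The escalation path a hesitant human provides by default can be made explicit in software. Below is a minimal sketch, assuming a model that emits raw per-label scores: convert the scores to probabilities and route any case whose top probability falls below a threshold to a human. Softmax confidence is a crude proxy and is itself known to run high on out-of-distribution inputs, so this implements the escalation valve, not genuine second-order awareness.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(scores, threshold=0.9):
    """Act only when the top probability clears the threshold;
    otherwise escalate to a human -- a crude stand-in for the
    hesitation a human expert would have provided for free."""
    probs = softmax(scores)
    top = max(probs)
    label = probs.index(top)
    action = "act" if top >= threshold else "escalate"
    return (action, label, top)

print(decide([4.0, 0.5, 0.2]))  # familiar-looking case: clears the threshold
print(decide([1.1, 1.0, 0.9]))  # ambiguous case: escalates to a human
```

The threshold is a policy choice, not a technical one: it encodes how much the organization trusts the system's familiar territory, and it should be revisited as the case mix drifts.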
Practical Implications
Before automating any process, ask:
• What do the experts do that isn't in the documentation?
• What questions do they ask that aren't in the checklist?
• What do they notice that they can't explain how they notice?
• What would a junior miss that a senior would catch?
The answers to these questions point to the tacit dimension: the part of the capability that won't survive naive automation.
Strategies for Tacit Knowledge
• Shadow Periods: Before automating, have the AI "shadow" human experts. Don't train it only on outcomes; observe the process, including the pauses, the questions, the escalations.
• Uncertainty Surfaces: Design systems that expose when they're in unfamiliar territory. If the AI can't flag its own uncertainty, build external signals that humans can interpret.
• Expert Retention: Keep human experts in the loop longer than seems necessary. The tacit knowledge they carry is worth more than the efficiency of removing them.
• Failure Archaeology: When automation fails, don't just fix the bug. Ask: what tacit knowledge would have prevented this? Document the undocumented.
• Apprenticeship Architecture: For critical capabilities, maintain human-to-human knowledge transfer alongside AI systems. Some knowledge requires a living chain.
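An uncertainty surface need not live inside the model at all. One external signal, sketched here with hypothetical helper names (`familiarity`, `review_flag`), is distance in feature space from the cases the system was built and validated on: when a new case sits far from everything previously seen, flag it for human review regardless of how confident the model sounds.

```python
def familiarity(new_case, seen_cases):
    """Euclidean distance to the nearest previously seen case.
    Large values mean the system is in unfamiliar territory."""
    return min(
        sum((a - b) ** 2 for a, b in zip(new_case, seen)) ** 0.5
        for seen in seen_cases
    )

def review_flag(new_case, seen_cases, radius=1.0):
    """External signal humans can interpret: True means 'novel, review me'."""
    return familiarity(new_case, seen_cases) > radius

seen = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(review_flag((0.1, 0.1), seen))  # close to known cases: no flag
print(review_flag((5.0, 5.0), seen))  # far outside familiar territory: flag
```

In practice the feature representation and the radius both require judgment, which is the point: the signal exists so that a human can exercise judgment the system cannot.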
The Deeper Question
Polanyi's Paradox raises a question that most AI strategies ignore:
"What are we losing that we don't even know we have?"
The most dangerous tacit knowledge losses are the ones no one notices until much later. The capability degrades slowly. The edge cases accumulate. The institutional wisdom erodes.
By the time someone says "we used to be better at this," the tacit knowledge is gone, and no one remembers exactly what it was.
Connection to the Scar Taxonomy
Polanyi's Paradox is the theoretical foundation of what we call Experiential Scars: knowledge that exists only because someone lived through something, and dies when they leave.
Every scar type ultimately traces back to knowledge that resisted articulation:
• Experiential scars: "You had to be there."
• Contextual scars: "It depends on things we never wrote down."
• Relational scars: "Only she knew how to work with them."
The scar is what remains after tacit knowledge is lost. Polanyi explains why the loss is inevitable without deliberate preservation.
What This Means for Leaders
For AI strategy: Budget for tacit knowledge extraction as part of automation projects. It takes longer than documenting processes. It's also more valuable.
For organizational design: Maintain redundancy in expertise. A single expert retiring shouldn't mean critical tacit knowledge disappears.
For risk management: Factor in tacit knowledge loss as a risk category. It won't appear in traditional risk frameworks, but it compounds quietly.
For humility: Accept that some capabilities can't be fully automated, not because of technical limitations, but because of what we are. Some things we know we cannot tell.