The Cost of Knowing What to Think: That's the Dilemma
Processing a million conversations with a language model cost US$75,000 a year ago. With specialized small models, it now costs less than US$800. We went from prohibitive to viable in months. Analyzing, synthesizing, modeling, writing, diagnosing: everything an organization paid dearly for because it required experience and specialization is collapsing in price.
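For scale, those figures imply a drop from about 7.5 cents to under a tenth of a cent per conversation, roughly a 94x reduction. A quick back-of-envelope sketch (the dollar figures come from the text; the variable names are illustrative):

```python
# Back-of-envelope check of the cost collapse described above.
# Dollar figures are from the text; per-conversation costs are derived.
conversations = 1_000_000

frontier_cost = 75_000    # US$ a year ago, general-purpose model
small_model_cost = 800    # US$ today, specialized small models

per_conv_before = frontier_cost / conversations     # $0.075 per conversation
per_conv_after = small_model_cost / conversations   # $0.0008 per conversation
reduction = frontier_cost / small_model_cost        # ~94x cheaper

print(f"Before: ${per_conv_before:.4f}/conversation")
print(f"After:  ${per_conv_after:.4f}/conversation")
print(f"Cost reduction: {reduction:.0f}x")
```

At that price, the binding constraint stops being the budget and becomes, as the rest of this piece argues, the judgment about what is worth processing at all.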
It's the same dynamic the Industrial Revolution applied to mechanical work. Except now it's happening to intellectual work. The new bottleneck isn't purchasing power. It's judgment about what's worth processing.
In the film "Arrival" (2016), Dr. Louise Banks is a linguist recruited to communicate with an alien species that has just arrived on Earth. As she learns their language, she discovers it grants an extraordinary capability: seeing time non-linearly. She can perceive her entire life — past, present, and future — simultaneously. She knows every decision she will make and every consequence she will face.
When she confronts her most important choice, knowing her daughter will die young from an incurable disease, she must decide with full knowledge of the consequences and their ramifications. She cannot avoid it. She must embrace what will come. Her knowledge didn't eliminate the need for judgment. It intensified it. Now she must choose consciously, rather than taking refuge in uncertainty. Access to all information doesn't automate knowing what to do with it. It makes decisions harder.
When Thinking Was Expensive
Exploring was a luxury, available only to those who could pay for specialists in whatever they sought to address. Only teams with significant budgets could model multiple expansion scenarios, investigate adjacent fields before diversifying, or test hypotheses about new segments before committing resources. Strategic planning was an exercise reserved for those who had consultants dedicated to thinking while internal teams executed.
Today, anyone with access to an agent can research an entire industry in hours, simulate financial projections under different assumptions in minutes, and generate competitive analysis that previously required weeks of specialized work.
But more information doesn't produce better strategies. It complicates them.
Studies in business decision-making show the same pattern: When information exceeds processing capacity, the quality of strategic decisions deteriorates. At first, more data improves planning. Beyond a certain point, additional information produces analysis paralysis.
Our brain operates in two ways: fast using recognized patterns, or slow using deliberate analysis. Under information overload, the brain automatically switches to fast mode, even when the strategic decision requires slow mode. This is a physiological response to excess inputs.
Organizations face the same situation as Louise Banks. Language models grant access to market analysis that was previously impossible to process. An executive can see thousands of market entry scenarios simultaneously, explore hundreds of business model variants in parallel, and analyze decades of consumer behavior in seconds.
But that doesn't tell you which market to attack first, which business model to implement, which consumer signal indicates structural change and which is temporary noise.
The Difference Has Shifted
Competitive advantage no longer lies in who can pay for more analysis but in who formulates their strategy better through introspection and context selection. Not in who generates more growth scenarios but in who knows which ones to discard without wasting resources. Not in who models more sophisticated projections but in who defines the right assumptions before building the model.
An AI agent can research a hundred approaches to organizational resilience in minutes, generate dozens of digital transformation frameworks, and produce multiple investment theses for new geographies. But which of these options redefines your competitive position? Only those who know what they're looking for can decide that.
And knowing what you're looking for strategically requires having bet wrong enough times to recognize when a bet has solid foundation versus when it's an attractive narrative without substance. That's strategic judgment that can't be downloaded.
In 2008, while the automotive industry was cutting production and some manufacturers were declaring bankruptcy, Hyundai faced the same crisis as everyone else. The data was identical: rising unemployment, frozen credit, collapsing sales.
Hyundai analyzed not just what was stopping sales but why potential buyers were afraid. They discovered the fear wasn't about the product, it was about being tied to credit after losing employment.
From that understanding, they launched Hyundai Assurance: buy a car and if you lose your job in the next year, we'll buy it back without affecting your credit history. Within weeks they had a presence at the Super Bowl. Sales took off while the competition remained paralyzed by the same information everyone had.
Hyundai didn't have exclusive data. They had better judgment about what to do with data available to everyone. Market uncertainty was identical. The ability to read it as a strategic opportunity rather than an operational threat was the differentiator.
This capability doesn't emerge from a predictive model. It emerges from leadership that has seen enough crises to recognize that consumer fear isn't an obstacle but an unarticulated need waiting for a solution.
Breadth Without Total Depth
AI agents enable rapid development of operational familiarity across diverse fields. A CFO who knows nothing about blockchain technology can, in an afternoon, understand enough to evaluate whether it represents a real opportunity for supply chain optimization or technology looking for a problem.
You don't become a technical expert. You become an executive with a broad enough field of vision to see connections between domains that a technical specialist wouldn't notice.
An innovation leader can explore network theory principles, understand how small strategically connected nodes can have impact, and apply those principles to organizational design in days, not months of external consulting.
Access to technical information no longer limits the ability to formulate informed strategy. What limits it is knowing how to discern what is strategic signal and what is interesting but irrelevant technical noise.
A thousand simulations with flawed assumptions don't produce better strategy. They produce more sophisticated confirmation of the same strategic error. What determines the value of exploration isn't the model's power but the quality of assumptions you define before exploring. That's an act of judgment, not computational processing.
Producing vs. Deciding What to Produce
Aristotle distinguished between techne, the technical ability to execute something, and phronesis, the practical wisdom to decide what's worth executing and why.
Expert financial analysts have techne — they know how to build impeccable valuation models. But deciding which company is worth valuing deeply, at what point in the cycle, with what assumptions about structural changes in the sector — that requires phronesis.
Artificial intelligence is the greatest expansion of techne in business history. It can produce market analysis, financial projections, competitive assessments with technical competence that rivals or exceeds that of specialized human analysts.
But deciding what analysis is worth commissioning, what strategic constraints to impose on the model, what to do with the results, when to trust the output and when to discard it because the premises don't capture business reality: that remains phronesis. It remains a leadership function. It remains scarce.
Louise Banks, with access to total temporal information, still needs phronesis to decide what to do with that knowledge. AI radically expanded our capacity to process analysis. But deciding what's worth analyzing and what to do strategically with what you find remains irreducibly human.
Strategic phronesis isn't developed in MBA programs. It's formed by leading real initiatives that fail or prosper, making mistakes with costly consequences, developing intuitions about timing and execution that only emerge from accumulated experience under pressure.
The current danger isn't that organizations lack access to sophisticated analytical capacity. It's that management teams confuse access to analysis with strategic capability. That they assume because they can generate a hundred growth scenarios, their strategic planning automatically improves. That they believe because they processed a thousand industry reports, their understanding of where to compete deepened proportionally.
Competitive advantage will be built by those who process with judgment about what's worth processing and what to do with the results. Judgment that forms at the intersection of explicit knowledge about the business and tacit wisdom about how markets actually function.
Judging Well Which Battles to Fight
The ancient Greeks understood this. That's why leadership education didn't end with technical mastery. It included philosophy, rhetoric, ethics — disciplines dedicated to developing judgment about what's worth doing, not just ability to execute what's decided.
We're going to have to return to those traditions in leadership development, relearning philosophy not as cultural ornament but as training in thinking critically about premises, assumptions, and second-order consequences.
In a world where processing analysis is trivial and cheap, knowing what to analyze, and why that analysis matters strategically, becomes the only advantage that can't be replicated by buying access to a more powerful model.