AI and getting trapped inside the box
New tools raise the stakes for creative thinking.

Geoff Wilson
A recent essay from AI Snake Oil made a surprising claim: that artificial intelligence might slow down science. Not because it’s inaccurate, but because it’s too good at being accurate–within the wrong frame.
The author points to the centuries-long dominance of the geocentric model of the universe. At one point, predictions of planetary motion based on Earth being the center of the universe were astonishingly precise. But they were also deeply wrong. The math worked. The understanding didn’t.
The essay poses an unsettling idea: if AI had existed then–and been trained on those models–we might have clung to the wrong theory even longer. The system would optimize the pattern, not question the premise.
That idea should raise eyebrows in business, too.
Because if AI has the potential to reinforce flawed scientific models under the guise of precision, what could it do to our strategic and operational models in the business world? Could the efficiency of the tool blind us to the fragility of the box it lives in?
Let’s rewind to the 1990s.
Long-Term Capital Management (LTCM) was an elite hedge fund run by some of the most decorated minds in finance–including two Nobel Prize winners. Their trading strategy was based on elegant models, airtight math, and decades of data. They had accounted for everything–except what had never happened.
When the Asian financial crisis hit and the Russian debt default followed, market behavior fell outside the box. LTCM’s models didn’t break–they simply didn’t apply. Within months, the fund teetered on the edge of collapse, threatening to drag the global financial system down with it.
That story wasn’t about incompetence. It was about conviction–conviction in a model that worked, until it didn’t. A belief in historical precision, at the cost of hypothetical imagination.
Which brings us back to AI.
Artificial intelligence excels at pattern recognition. It’s built to identify structure, predict based on precedent, and optimize for success–all within the observed dataset. But what happens when the next critical insight lives outside the dataset? Or when the market moves in a way it never has before? Or when a first-principles challenge is needed, not a predictive output?
What happens to outside-the-box thinking when the most powerful tools we use only look inside the box?
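To make that failure mode concrete, here is a minimal, purely illustrative Python sketch. The data, the regimes, and the numbers are all synthetic assumptions invented for this post, not real market figures or anyone’s actual model: a simple trend fit on one calm regime extrapolates confidently into a shock it has never observed.

```python
# Illustrative sketch only: synthetic data standing in for any system that
# predicts purely from historical patterns. No real market data or model here.
import numpy as np

rng = np.random.default_rng(42)

# "Regime A": a calm, gently oscillating series the model gets to observe.
t_hist = np.arange(500)
history = 1.0 + 0.1 * np.sin(t_hist / 25.0) + rng.normal(0, 0.02, t_hist.size)

# Fit a simple linear trend: the model finds clean structure in the past.
slope, intercept = np.polyfit(t_hist, history, deg=1)

# Forecast the next 100 steps. Inside the box, the prediction is tight and confident.
t_future = np.arange(500, 600)
prediction = slope * t_future + intercept

# "Regime B": an unprecedented shock the training window never contained.
shock = history[-1] + np.cumsum(rng.normal(-0.05, 0.10, t_future.size))

print(f"Predicted range: {prediction.min():.2f} to {prediction.max():.2f}")
print(f"Actual range:    {shock.min():.2f} to {shock.max():.2f}")
print(f"Worst miss:      {np.max(np.abs(prediction - shock)):.2f}")
# The model didn't break; it simply didn't apply.
```

The point is not the specific numbers. It is that the confidence of the forecast says nothing about whether the regime that produced the training data still holds.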
That’s not a knock on the technology. AI can enhance insight, increase productivity, and surface connections humans might miss. But it’s also a mirror–reflecting what’s been done, not necessarily what should be done next.
In that way, AI can become like the geocentric model: precise in the wrong direction.
Or like LTCM: confidently accelerating toward the edge of a cliff because the GPS has never seen a cliff before.
Strategic leaders–the real kind, the ones who hold the long arc of value creation and risk–can’t afford to outsource the act of questioning. AI can suggest. It can support. But it cannot wonder. It cannot imagine the inverse, the anomaly, the edge case that no one has seen but everyone should fear.
The job of leadership–now more than ever–is to ask: what if the model is wrong?
What if the future doesn’t look like the past?
What if we are right about everything, except the thing that matters most?
Outside-the-box thinking is not a luxury. It is, increasingly, the only kind of thinking that will matter. Because as the boxes get smarter, the need for human insight–uncomfortable, abstract, imperfect insight–only grows.
We should use AI. But we should also stay skeptical. Every model has a boundary. Every dataset has a blind spot. And every organization that overfits to efficiency risks underfitting to reality.
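As a hedged illustration of that last line, again with made-up points rather than real data: a high-degree polynomial that chases every wiggle inside its dataset looks superb in-sample and falls apart one step past the boundary.

```python
# Toy overfitting demo on synthetic points: illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Fifteen noisy observations of a simple underlying pattern.
x_train = np.linspace(0.0, 1.0, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

# An "efficient" model that chases every wiggle it has ever seen.
coeffs = np.polyfit(x_train, y_train, deg=10)

# In-sample, the fit looks excellent...
in_sample_error = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# ...but just past the edge of the data, it is wildly wrong.
x_beyond = np.linspace(1.05, 1.2, 4)
beyond_error = np.max(np.abs(np.polyval(coeffs, x_beyond) - np.sin(2 * np.pi * x_beyond)))

print(f"Worst error inside the dataset: {in_sample_error:.3f}")
print(f"Worst error just outside it:    {beyond_error:.3f}")
# Every dataset has a boundary; the confidence doesn't stop there, but the validity does.
```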
So yes, use the tool.
But keep one eye on the horizon–and one foot out of the box.
What do you think? How do we keep a foot outside the box?