AI and the calculator analogy
The rise of AI is often compared to calculators. People feared we’d stop thinking then too. But calculators don’t hallucinate and they don't need prompt engineering. Time to revisit the analogy!
You've heard the analogy a thousand times. Just as calculators freed us from arithmetic, Artificial Intelligence will free us from the tedious parts of thinking, such as summarizing, proofreading and outlining. Supposedly, the important stuff will stay untouched: strategic thinking, decision-making, metacognition…
Early in the boom of Large Language Models, the analogy quickly caught on. But does it hold up? Can we really divide thinking into distinct layers, with the tedious tasks at the bottom and meaningful tasks at the top? What do we lose when we skip the “unglamorous” parts of thinking? It’s time to revisit the analogy.
What’s really at stake here is the idea of cognitive offloading.
Cognitive offloading: what the analogy gets right
Cognitive offloading is the use of a tool or technique to reduce the mental effort required for a task that would otherwise rely on memory or internal reasoning. The concept emerged in 2016, well before LLMs became widely available.
Arguably, the most iconic argument against cognitive offloading was made by Socrates in Plato’s Phaedrus. Socrates claimed that writing would make people forgetful, because they would no longer recall things by themselves. This reasoning sounds almost ridiculous today, but it echoes in modern fears around AI.
Millennia later, calculators raised similar concerns. They take over arithmetic, reducing cognitive load and allowing students to focus on problem-solving. This shift helped math and science education move away from rote drills and toward conceptual understanding.
As with writing, education systems didn’t merely adopt calculators as new tools; they rewrote their curricula around them. LLMs offer the same promise beyond numbers. While calculators took over the numbers, LLMs are coming for everything else.
Why the analogy breaks down
Here’s where the analogy stops working.
Calculators don’t hallucinate. LLMs generate text based on statistical patterns, not truth. When they happen to be accurate, it’s usually because their training data was. However, they’re built to sound plausible, not to be right. Recent research suggests that hallucinations are here to stay. Calculators, by contrast, are reliable. They do one thing and do it right.
Skill erosion is harder to spot with LLMs. When you forget how to do long division, you know. However, losing your ability to write or reason clearly is more difficult to detect, especially if LLMs make you feel like you're still capable. This is the paradox of outsourced thinking. As long as the output looks good, you assume the skill is intact. Until you have to perform without an LLM.
The boundaries are blurry. When using an LLM, you might think you're just getting help with phrasing, but you've probably handed over the structure, tone and even the authorship of the ideas.
Calculators offload clearly defined tasks. With LLMs, the delegation is murkier. Where does the cognitive effort end and the cognitive offloading begin?
Also, LLMs don’t have a fixed domain. In theory, there are no limits to the kinds of cognitive tasks you can delegate to an LLM. And that makes accountability slippery. This is true in classrooms, at work, even in your own mind. Who’s thinking? Who’s learning? With calculators, this wasn’t a question.
And that’s just the cognitive side. I haven’t touched on copyright, environmental costs or the messy job market. I won’t dig into those here, but they’re hardly background noise, and none of it looks like the calculator era either.
Cultural shift towards higher-order thinking?
Zooming out reveals something more fundamental. Tools not only change how we work, but also what we admire as a society. Calculators shifted the value placed on arithmetic. AI is shifting the value of many thinking tasks we used to consider signs of mastery and intelligence: writing, explaining, analyzing and remembering (though memory has been losing ground for decades).
If a machine can do it, we start to wonder whether it’s worth learning at all. This is not just a technical shift. It’s cultural. The idea of cognitive effort itself is being re-priced. And faster than we’re ready for. Optimists argue that this is the whole point: let LLMs handle the grunt work. Humans can then focus on strategic thinking or decision-making.
The calculator analogy suggests that, just as calculators shifted the value from arithmetic to mathematical reasoning, LLMs are shifting the value from memory, writing and explanation to higher-order thinking. But it’s not that easy.
All that hinges on a fragile idea: that thinking tasks come in neat layers. That you can automate the base and keep the top. That the pyramid won’t collapse.
The counterpoint: the hierarchy of thinking skills is a myth
Higher-order thinking needs lower-order skills. The notion that thinking is organized in a neat hierarchy (often depicted as a pyramid) is unfounded. Experts have pointed this out for years.
Knowledge of facts and memory are necessary for critical thinking and creativity. They are intertwined. As Carl Hendrick so elegantly puts it: “You can’t connect the dots if you don’t have any dots”. You can’t analyze or be strategic about something you don’t understand. You can’t reflect on thoughts that aren’t grounded in factual knowledge. If automation erodes those, the rest falters too.
This is why teaching facts, practicing recall, and making learning purposefully difficult still matter. Especially now. A student who relies on AI for summaries or arguments won’t develop the mental framework needed to critique ideas or apply them in new contexts.
LLMs can help build these skills too, but their default use as chatbots doesn’t encourage it. The most beneficial uses of AI for learning are ones that create desirable difficulties: the productive cognitive struggles that make learning stick. These difficulties are also what leads to higher-order thinking in the first place.
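To make that concrete, here is a minimal sketch of a chatbot deliberately configured to create desirable difficulties instead of removing them. It assumes the OpenAI Python SDK; the model name, the system prompt and the helper function are illustrative choices, not a recommended setup.

```python
# Minimal sketch (assumptions: OpenAI Python SDK installed, OPENAI_API_KEY
# set in the environment; model name and prompt wording are illustrative).
# The system prompt forces retrieval practice: ask before explaining.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_SYSTEM_PROMPT = """You are a tutor who never answers directly.
For every topic the student raises:
1. Ask one question the student must answer from memory.
2. Only after they attempt an answer, give corrective feedback.
3. End every turn with a slightly harder follow-up question."""

def tutor_reply(history: list[dict]) -> str:
    """Send the conversation so far and return the tutor's next turn."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": TUTOR_SYSTEM_PROMPT}, *history],
    )
    return response.choices[0].message.content

# The student asks for an explanation; the tutor responds with a question.
history = [{"role": "user", "content": "Explain cognitive offloading to me."}]
print(tutor_reply(history))
```

The code itself is trivial; the design choice is what matters. By default, the retrieval effort stays with the learner instead of being offloaded to the model.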
And that’s where the AI-calculator analogy truly breaks down. With LLMs, skill erosion happens by default unless they're used with intent. Even so-called “higher-order thinking” depends on the skills being eroded, so the idea of “making room” for it falls apart. Calculators had no such side effect.
When the tool becomes the skill
Although “higher-order thinking” isn’t necessarily winning the AI cognitive wars (as the calculator analogy implies), another type of knowledge is clearly gaining ground. Mastering LLMs is becoming a valuable skillset in itself. Whether it’s prompt engineering, context engineering, agentic workflow design or GPT-building, the ability to steer LLMs toward precise or scalable outcomes is already an ever-growing body of knowledge separating amateurs from experts.
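To show what that skillset looks like at its most basic, here is a hypothetical sketch of “context engineering” (the function, the field names and the template are mine, not a standard). The craft is less about clever wording and more about deciding, explicitly, what the model gets to see: role, source material, task and constraints.

```python
# Hypothetical sketch of context engineering: assemble the model's input
# from explicit, inspectable parts instead of a bare question. All names
# and the template format are illustrative assumptions.

def build_prompt(role: str, source_text: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from explicit parts."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return "\n".join([
        f"Role: {role}",
        "",
        "Source material (answer ONLY from this):",
        source_text,
        "",
        f"Task: {task}",
        "",
        "Constraints:",
        rules,
    ])

prompt = build_prompt(
    role="Fact-checker for an education newsletter",
    source_text="[paste the draft article here]",
    task="List any claims not supported by the source material.",
    constraints=[
        "Quote the exact sentence for each claim.",
        "Say 'none found' rather than guessing.",
    ],
)
print(prompt)
```

None of this is sophisticated yet, and that is the point: the expertise starts here and compounds quickly, from single prompts to multi-step agentic workflows.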
Yes, calculators also required basic competence. But it stopped there. There was no ecosystem of advanced calculator design for the general public, no body of theory, no careers built on fine-tuning them (beyond the engineers who actually made them). The knowledge forming around LLMs is different in scale, complexity and cultural significance. This isn’t just hype.
In education, this raises new questions about what Generative AI literacy should look like. I've written about the paradox of openness and the need for scaffolding in AI-driven learning design. Recent research also suggests that advanced students may actually benefit from using passive AI chatbots, because it fosters higher metacognitive awareness. The way forward likely lies in a careful mix of structure and autonomy: teaching students not just with AI, but about how to master it.
The end of the analogy is just the beginning
The AI-calculator analogy is comforting, but that comfort hides its flaws.
The next time someone brings up the AI-calculator analogy (and trust me, someone will), you’ll be ready with better answers. Yes, both tools offload. And yes, both tools have a significant impact on education.
But calculators don’t hallucinate. No one made calculator engineering a top corporate skill. And their unreflective use doesn’t quietly erode learners’ capacity to think. LLMs are a different story.
Calculators changed how we approach math. AI is taking it further, reaching into how we communicate, which professional skills are in demand and which kinds of cognitive effort we still consider meaningful.
This isn’t just a shift in work and education. It’s a shift in what we value as thinking. The real question isn’t what we think with AI.
It’s what we now call thinking at all.
Keep learning
Nerd out with AI
Prompt suggestions (always ask follow-up questions):
Everyone compares the boom of AI to the time when calculators came out. Can you break down where that analogy holds and where it completely falls apart, especially when it comes to learning?
I want to avoid skill erosion when using AI tools. Can you help me design a workflow where I still do the hard thinking myself, but use the AI in a way that supports learning?
Act as a teacher and test me using retrieval practice on “cognitive offloading and the AI-calculator analogy.” Ask me 6 questions, one at a time, only proceeding when I answer. Make them progressively harder.
Links
Cognitive offloading: This article from 2016 is credited with introducing the concept of “cognitive offloading,” even if no one at the time anticipated the rise of LLMs. The idea has gained new urgency, because it captures what’s quietly happening when we let AI handle the very tasks that shape how we learn, reason and remember.
The most important memory is still the one inside your head: In this excellent piece, Prof. Dr. Carl Hendrick unpacks the consequences of AI-driven skill erosion, with a sharp focus on memory. Even though his outlook is pessimistic, his argument is compelling: it is detailed, grounded and clear-eyed about the cognitive risks AI introduces.