I have seen many people use the “AI and calculator” comparison when arguing about using AI or integrating it into schools, education, or even governance.
You know, the whole “calculators didn’t ruin math, they made it better” line, or the jokes about the “you’ll never have a calculator in your pocket” statements.
And I’ll be honest, I have been known to use that comparison too, but it never felt right. It was more apples to oranges than apples to apples, if you catch my drift.
However, I stumbled upon an older “technology” that I think is a much more aligned comparison.
I found it when reading about programmers in the 1950s who thought the world was ending.
Not because of nuclear war or economic collapse, but because someone invented the compiler.
If you don't know what a compiler is, stay with me - this sometimes-forgotten revolution has some pretty wild (and ironic) similarities to the current AI transformation that's seemingly upending everything from how we work to how we teach.
Also, before I continue: I could give you a list of things this article will do for you, how it can make you more money or set you up for a promotion, but honestly the most important thing to understand in it is this:
We've been here before.
The issues with AI are not new, not even a little.
The arguments, the fears, the resistance - it's all happened before.
And the people who understood the patterns in these fears didn't just survive. They thrived.
So, let's look at the patterns I found, specifically with compilers.
Compilers? What are those?
Back in the 1950s, if you wanted a computer to add two numbers, you couldn't just write "add 2 + 2." The first electronic desktop calculators wouldn't even arrive until the early 1960s.
You had to speak the machine's language - a brutal sequence of ones and zeros, or if you were lucky, cryptic codes like "LDA 1000" (load the accumulator with the value at memory address 1000, or in more layman's terms, "hey computer, do this thing here" but with numbers).
Every single instruction had to be spelled out in excruciating detail.
Programmers literally counted bytes, managed memory addresses by hand, traced the path of every electron through the circuits. (Now for someone like me this sounds like crazy fun, but you can see where this can become tedious or complicated.)
John Backus, who would later lead the FORTRAN project, described it as "hand-to-hand combat with the machine." The work was brutal, demanding an intimate knowledge of the hardware's peculiar limitations and bizarre difficulties.
This created what Backus called a "priesthood of programming."
These early programmers saw themselves as guardians of mysteries far too complex for ordinary mortals. Their identity wasn't just tied to their skills - it was inseparable from the difficulty itself. The harder the work, the more special they were.
Then along came this radical idea: What if we could write something closer to human language - like "X = Y + Z" - and have a program automatically translate it into the machine's language? This translator was called a compiler.
It was like having a brilliant assistant who could take your rough notes and turn them into the precise, tedious instructions the computer needed.
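To make that concrete, here is a minimal sketch in Python of what that translation step does. It is not a real compiler, and the LOAD/ADD/STORE mnemonics are made up for illustration - just enough to show how one human-readable line becomes several machine-style instructions.

```python
# A made-up sketch of what a compiler does: translate a human-readable
# statement into the step-by-step instructions the machine actually runs.
# The LOAD/ADD/STORE mnemonics are illustrative, not a real instruction set.

def compile_assignment(statement: str) -> list[str]:
    """Turn something like 'X = Y + Z' into machine-style instructions."""
    target, expression = (part.strip() for part in statement.split("="))
    left, right = (part.strip() for part in expression.split("+"))
    return [
        f"LOAD {left}",     # fetch the first operand into the accumulator
        f"ADD {right}",     # add the second operand to it
        f"STORE {target}",  # write the result back to memory
    ]

for instruction in compile_assignment("X = Y + Z"):
    print(instruction)
# LOAD Y
# ADD Z
# STORE X
```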
The priesthood was not amused.
When Grace Hopper suggested that computers could understand something resembling English, the reaction was swift and dismissive. "You can't do this," they told her, "because computers don't understand English." The machine was a calculator, nothing more. To suggest otherwise was to fundamentally misunderstand its nature. (sound familiar yet? don’t worry it gets better)
Even after Hopper built a working compiler - the A-0 system - the resistance continued.
"Nobody believed that," she later recalled. "I had a running compiler and nobody would touch it. They told me computers could only do arithmetic."
This wasn't just stupidity or stubbornness (although there was plenty of that). It was something more human. When your expertise is built on mastering complexity, simplification feels like erasure.
When your value comes from being one of the few who can speak the machine's language, teaching it to understand yours feels like betrayal.
The arguments against compilers centered on three core fears, each revealing something essential about how we respond to abstraction.
First came the cult of efficiency.
In an era when computers had kilobytes of memory and every machine cycle mattered, hand-optimized assembly code was a necessity. Early compilers produced bloated, inefficient output. Even as late as 1965, Honeywell admitted that "a highly skilled assembly programmer could still beat COBOL's output in efficiency." For many applications, this was the difference between possible and impossible.
But efficiency was just the surface argument. Deeper down was the fear of losing control.
Assembly programmers manipulated memory directly, traced execution paths instruction by instruction. The compiler was a black box that severed this intimate connection. "If I don't code it down to the metal," they asked, "how can I trust what's happening?"
The deepest resistance, though, was philosophical.
Compilers suggested that machines could perform an act of translation - taking human-like expressions and converting them into machine logic. This challenged the fundamental identity of the computer as a dumb calculator and the programmer as its essential translator. If the machine could translate, what was the programmer?
Again, if you have been following the LinkedIn posts about AI - and roughly 90% of them fit this pattern - this fear makes up half of the arguments.
Yet within a decade, the debate was over.
FORTRAN succeeded not by winning arguments but by delivering results. A thousand lines of assembly could be written in fifty lines of FORTRAN. A 1958 survey found that over half of all code running on IBM computers was compiler-generated.
The priesthood hadn't been convinced - they'd been outnumbered.
The Pattern Repeats
If you have been paying attention to most LinkedIn posts about AI, we're living through the same pattern. The arguments against AI-generated code or work echo the compiler debates with ironic precision.
Where programmers once worried that compiled code couldn't match hand-written assembly, today we hear that "AI writes code that is functional but not adaptable" and that "modifying AI-generated code is often harder than writing from scratch."
The efficiency argument has become a quality argument, but the structure is identical.
And in this structure, we find that the control argument has evolved too.
Now it is important to note here: the compiler was deterministic - given the same input and the same compiler version, it produced the same output every time.
Once you understood its rules, you could trust its translations. The black box had logic, even if that logic was hidden. (despite many not believing that at first)
AI is probabilistic, trained on patterns first, with rules coming after (the opposite of the compiler in some ways). It can "hallucinate" - confidently producing code that looks right but contains subtle, critical flaws. One day it performs brilliantly, the next it fails at basic tasks.
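A toy contrast makes the difference clear. Neither function below is a real compiler or a real model - it is just an illustration of deterministic versus probabilistic behavior:

```python
# Toy illustration only: a deterministic "compiler" vs. a probabilistic "generator".
import random

def toy_compiler(source: str) -> str:
    # Deterministic: a fixed rule maps the same input to the same output, every time.
    return source.upper().replace(" ", "_")

def toy_generator(source: str) -> str:
    # Probabilistic: sampling means the same prompt can come back different each call.
    return random.choice([source.upper(), source.title(), source[::-1]])

prompt = "x = y + z"
assert toy_compiler(prompt) == toy_compiler(prompt)        # always holds
print(toy_generator(prompt), "|", toy_generator(prompt))   # may differ between runs
```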
Developers describe AI tools as "unpredictably hit or miss," capable of sending even experienced engineers down "wrong rabbit holes." The black box is no longer just opaque - it's inconsistent. At first glance this is a great argument against AI, and once you add the philosophical resistance, it only seems stronger…
Most of what I hear day to day goes something like this:
"Developers who rely too much on AI might stop improving their own coding and problem-solving skills." or “We fear creating pure 'copy-paste' coders with zero understanding."
However, I argue this unpredictability isn't a bug to be fixed but an inherent property of how these systems work - and within the patterns of that behavior, we can still find deterministic rules.
And rules become standards, standards create scale.
This scale is exactly why the major AI companies are taking the world by storm: they didn't solve all the problems of AI, they solved most of the problems of scale behind AI.
They found deterministic rules in probabilistic systems.
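Here is a minimal sketch of what that can look like in practice: sample a noisy system several times and keep the majority answer (often called self-consistency). The noisy_model function is a stand-in I made up for an LLM call, not a real API.

```python
# One way to pull deterministic behavior out of a probabilistic system:
# ask it many times and keep the majority answer ("self-consistency").
import random
from collections import Counter

def noisy_model(question: str) -> str:
    """Stand-in for an LLM call: usually right, occasionally 'hallucinates'."""
    return random.choices(["42", "7"], weights=[9, 1])[0]

def majority_answer(question: str, samples: int = 15) -> str:
    votes = Counter(noisy_model(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(majority_answer("What is 6 * 7?"))  # almost always "42", even though each call can vary
```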
Which is exactly why my research on AI psychology works: psychology itself is built on this same discovery.
Humans are often probabilistic systems - at least our minds can be - yet we have built deterministic rules (personalities, mental diagnoses, etc.) on top of them.
The ironic thing here is, these “AI” companies and “experts” don’t always understand this.
When compilers arrived, adoption was chaotic, driven purely by economic necessity.
The same economic panic drives adoption today - companies implementing AI not from understanding but from fear of being left behind.
But while industry races blindly forward, education sits paralyzed.
Most institutions are stuck debating whether to ban ChatGPT or pretend it doesn't exist.
They're fighting yesterday's war while their students are already living in tomorrow's world.
Walk into any faculty meeting about AI and you'll hear the same tired debates. How do we detect it? Should we go back to paper exams? Maybe we need proctoring software that watches their screens?
It's like watching someone try to bail out the Titanic with a teaspoon. The water's already over the deck.
Meanwhile, their students are using AI for everything. Not just homework - for thinking.
They're having conversations with it, learning from it, building mental models with it.
The gap between what educators think is happening and what's actually happening would be funny if it weren't so tragic.
Only a handful of educators - and I mean really just a handful - are asking the questions that actually matter: If AI can do the homework, what exactly are we teaching?
What should we be testing when the old tests are obsolete?
These few aren't trying to detect AI use or prevent it. They're recognizing that we're adding another layer to human thought itself (or replacing it and finding ways to keep it alive), and that changes everything about how we measure learning.
The crazy thing is, these educators (including myself) often feel like heretics in their own institutions.
They'll pull me aside after meetings, speaking in hushed tones about experiments they're running. "I let my students use AI," they'll confess, like they're admitting to a crime. "But I make them document their process. Show their thinking. Critique the output."
They get it. They understand that banning AI from education is like banning calculators from math class. Sure, you can do it. But what exactly are you preparing students for?
A world that doesn't exist?
Problem is, for AI use to succeed, you need to be transparent about how you use it.
Unfortunately, this leads you into a lovely paradox I like to call:
The Transparency Trap
Institutions everywhere are implementing "transparent AI use" policies. The logic seems sound: if you use AI, disclose it. Transparency builds trust. Honesty matters.
Except it doesn't work that way.
Recent research from the University of Arizona, involving over 5,000 participants, found that disclosing AI use actually decreases trust.
When students learned a professor used AI for grading, trust dropped by 16%. When clients discovered designers used AI, trust fell by 20%. Even among people who regularly use AI themselves. (Also, I want to make it clear: we are still in the compiler conversation - just bear with me for a moment, this is all important.)
The researcher, Oliver Schilke, explained it this way:
"If you're transparent about something that reflects negatively on you, the trust benefit you get might be overshadowed by the penalty for what you revealed."
We mandate disclosure to maintain integrity, but disclosure itself is seen as evidence of cutting corners, of lacking genuine expertise.
We've created a system where honesty is punished and concealment is rewarded, at least in the short term.
For educators, this is a nightmare. You want students to use AI tools effectively and ethically. You implement policies requiring disclosure. But those same policies incentivize hiding AI use, because students know - correctly - that their work will be devalued if they're honest.
For professionals, it's equally complicated. Use AI to be more productive, but admit it and watch your perceived competence plummet.
Hide it and risk being caught, which research shows drops trust even further.
This paradox reveals something profound about our current moment. We don't just need new tools or new skills. We need a better value system. One that recognizes AI-augmented work as potentially equal to or better than unaugmented work.
We need to separate ourselves from the priesthood.
This brings me to what I think is really happening, what this historical parallel with compilers reveals about our future.
We're not simply adding another programming language or development tool.
We're adding a new layer of abstraction to human thought itself.
Now I get it, you have been hearing AI hype left and right about how it's changing the world.
I wrote an entire article on this, but truly it is not just the tech itself but the scale behind it that is important.
Just as compilers let us think in terms of algorithms rather than machine instructions,
Generative AI lets us think in terms of intentions rather than implementations.
The shift from assembly to FORTRAN was really a shift from telling the computer how to do something to telling it what to do.
The compiler handled the how.
AI represents the next step: from how and what to why.
We describe the outcome we want, the constraints we care about, and let the system explore the solution space.
The goal is not to make programming easier, though AI might do that too. It's about operating at a fundamentally different level of abstraction.
Instead of writing code, we're writing specifications. Instead of debugging implementations, we're debugging intentions. Instead of optimizing algorithms, we're optimizing prompts.
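Here is one illustrative sketch of that shift, with the same small task expressed at three levels. The run_prompt and llm names at the end are hypothetical placeholders, not a real API.

```python
# The same task - "find each customer's largest purchase" - at three levels of abstraction.

# Level 1, the HOW (assembly-era mindset): spell out every step yourself.
def largest_purchase_how(purchases):
    best = {}
    for customer, amount in purchases:
        if customer not in best or amount > best[customer]:
            best[customer] = amount
    return best

# Level 2, the WHAT (compiler-era mindset): declare the result, let the language do the work.
def largest_purchase_what(purchases):
    return {customer: max(a for c, a in purchases if c == customer)
            for customer, _ in purchases}

# Level 3, the WHY (AI-era mindset): describe the intent and constraints as a specification.
SPEC = """
Given a list of (customer, amount) pairs, return each customer's largest purchase.
Constraints: pure function, no external I/O, return {} for an empty list.
"""
# result = run_prompt(llm, SPEC)  # hypothetical call - the "code" is now the spec itself
```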
Moreover, the skills don't disappear - they transform.
Knowing assembly made you a better FORTRAN programmer because you understood what the compiler was doing.
Understanding traditional programming will make you a better AI orchestrator because you'll know when the output makes sense.
But the real value, the real work, shifts upstream. From syntax to semantics. From construction to architecture. From answers to questions.
The New Literacy
I keep thinking about what programming will even mean in five years. Hell, what it means right now is already shifting under our feet.
Just as we no longer think of programming as moving bits in registers, future programmers might not think of it as writing syntax in files.
They'll think of it as... honestly, I don't have the words. Neither do you. That's the weird part about living through a paradigm shift - the language doesn't exist yet.
We're all out here making it up as we go, pretending we understand what's happening.
We don't.
We're like those assembly programmers in 1957, absolutely certain that real programming meant counting bytes, while some kid with FORTRAN was about to make their entire worldview obsolete.
The programmers of the 1950s couldn't imagine modern software because they were too busy managing memory addresses. We can't imagine what comes next because we're too busy arguing about whether AI-generated code "counts" as real programming.
It's the wrong question. It's always the wrong question.
Look, something is always lost when we abstract away complexity. The assembly programmers were right about that. When compilers took over, we lost that intimate knowledge of the machine. When high-level languages emerged, we lost the elegance of hand-optimized code.
The old-timers will mourn this loss.
They'll gather at conferences and reminisce about when programmers "really" understood their code. Just like their predecessors gathered and reminisced about when programmers "really" understood their machines.
They're not wrong. They're just... irrelevant.
Because here's what also happens: we gain the ability to think thoughts that were unthinkable before.
To solve problems that weren't even recognizable as problems.
To build systems so complex that no human could hold them in their head, yet still reason about them.
That's what this new abstraction layer offers. Not laziness. Not obsolescence. But the chance to be more human - to focus on why we're building something instead of drowning in the mechanics of how.
The compiler revolution taught us one thing above all: fighting abstraction is like fighting gravity.
Exhausting and pointless.
The real question is what you choose to preserve as the ground shifts beneath you.
The curiosity? Keep that. The rigor? Essential. The deep understanding? Always. The fetishization of difficulty? Let it go.
I don't have all the answers about what comes next. We're all just out here experimenting, failing, learning. Writing the rules of a new language while we're speaking it.
But I know this:
The most interesting problems, the most valuable contributions, will come from those who can drop down to the metal when needed but spend most of their time in the clouds.
The priesthood is dead. Long live the priesthood.