My goal with this article is to give you the CliffsNotes on the old argument over AI, and yes, I mean old. AI is not new, and I am not talking 1950s old; I am talking 1630s old.
No matter if you are a policymaker, an educator, or a software developer, if you cannot learn from the past you will not be able to build the future. Worry not though, I can get you caught up lickety-split.
And it starts with one question.
Can machines think?
I know it may seem like this whole AI hubbub started only in the last three years, but believe it or not, René Descartes posed this very question about machine consciousness in 1637.
In his Discourse on the Method he outlined a ‘Turing Test’ that predates Alan Turing’s by over 300 years:
“If there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain two most certain tests whereby to know that they were not therefore really men. Of these the first is that they could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others: for we may easily conceive a machine to be so constructed that it emits vocables, and even that it emits some correspondent to the action upon it of external objects which cause a change in its organs; for example, if touched in a particular place it may demand what we wish to say to it; if in another it may cry out that it is hurt, and such like; but not that it should arrange them variously so as appositely to reply to what is said in its presence, as men of the lowest grade of intellect can do”
- Descartes, Discourse on the Method
I would like to mention that some of this is a breeze through my much more in-depth research from a section of my published article ‘Post Digital Governance’, where I actually trace talk all the way back to Aristotle about giving “tools intent”
…but for now, let’s stay focused.
This discourse on duality and the mind continued, and two hundred years later a brilliant woman by the name of Ada Lovelace, whose work laid foundations for much of the mathematics behind modern programming, chimed in on the debate in the 1830s (still haven’t made it to the 1900s yet!).
Ada Lovelace read Charles Babbage’s plan for the Analytical Engine and saw a new test coming.
She knew symbols could stand for more than numbers, and she saw beauty in the mathematics that bled into art.
She stated:
"Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent".
- Lovelace, Note G
However, despite this wonder, she denied that it [machines] could think or “originate” ideas. She knew the Engine (and thus future ‘machines’) could shuffle symbols with strict rigor, yet it “has no pretensions whatever to originate anything”.
Note G (one of Lovelace’s more famous writings) did not mock the machine; it marked the line between rule‑following and rule‑making.
And what is wild, and also a testament to the amazing mind of Lovelace, is that she predicted the ability of computers to use language, or at the very least the symbols that make up language, just as we do in NLP (natural language processing) research today!
"Many persons who are not conversant with mathematical studies, imagine that because the business of the [analytical] engine is to give its results in numerical notation, the nature of its processes must consequently be arithmetical and numerical, rather than algebraical and analytical. This is an error. The engine can arrange and combine its numerical quantities exactly as if they were letters or any other general symbols; and in fact, it might bring out its results in algebraical notation…".
- Lovelace, Note G
We can continue to follow this discourse another 100 years to a man by the name of Alan Turing. He has many writings, but one I find particularly interesting is his work Computing Machinery and Intelligence, published in 1950 (and this is the year where everyone else tells you “AI” started).
One of Turing’s arguments within it hinges on the idea that the perception of originality in human thought could itself be an illusion. Referencing the saying “there is nothing new under the sun”, he suggests that what we consider ‘original work’ might simply be an extension or transformation of ideas implanted in us through education and experience.
Turing implies that if human originality can be viewed as a reshaping of existing knowledge and principles, then the ability of machines to ‘surprise’ us or create something ‘new’ should not be summarily dismissed. We are taught language, math, concepts, and ideologies; school and society, in this way, are the “program”, and our brains are the “machines.”
Turing thus shifts the focus from the limitations of machines to the limitations of human understanding and observation in Lovelace’s era.
He explained that the advancements in technology reveal capacities in machines that were previously unimagined or unrecognized.
He actually references Lovelace directly:
" [I do not] assert that the machines in question had not got the property[thinking], but rather that the evidence available to Lady Lovelace did not encourage her to believe that they had it."
-Turing, Computing Machinery and Intelligence
He said the mark of thought is not originality but performance that meets human expectation, a relationship rather than a quality. If a program fools an attentive observer in open conversation, we must treat it as thinking.
He developed something called “the Turing test” as a way to find out if a machine is “truly thinking”, one not too far off from the original concepts that René Descartes came up with over 300 years earlier.
Today the argument returns with gradient‑based models that write, talk, and draw. Large language models predict the next token, yet their answers recombine the training data in ways no engineer can fully map.
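To make “predict the next token” concrete, here is a minimal sketch of the idea, assuming nothing beyond the Python standard library; the tiny corpus is invented, and a real model replaces these raw counts with billions of learned weights.

```python
from collections import Counter, defaultdict

# A miniature "training corpus"; real models see trillions of tokens.
corpus = "i think therefore i am i think therefore i write".split()

# Count which token follows which: the simplest next-token predictor (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Predict the most likely continuation of "i".
print(follows["i"].most_common(1))  # [('think', 2)]: learned from frequency alone
```

An LLM is, loosely, a vastly scaled-up version of this one move: given the context, score every possible next token, then pick from the scores.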
Just this March (2025) a paper was published showing definitive evidence of language models passing the Turing test.
What is interesting about this paper is not just that machines have passed the test, but that people refuse to accept the evidence (or accept it as evidence) that machines can think.
Some reply that the score is shallow, that the model lacks insight. Others, rightly so, say that the structure of the human mind is so fundamentally different from these supercomputers that it still isn’t even a contest.
Personally, I think the “brain vs. CPU structure” argument is the one that holds the strongest ground, as it is quite telling that an AI model needs a nuclear reactor’s worth of energy to be trained and scaled, while all my brain needs is a sandwich and coffee.
Yet despite this, many of the same arguments made against AI tests are the same charges long leveled at human psychometrics.
An IQ test grades outputs, not inner life, yet we have relied on it quite heavily to decide “how intelligent” someone is.
If that is enough for people, why demand more from artefacts?
Turing again warned of this double standard: “You cannot be sure that some casual remark of your own has not started the idea off.” The remark could come from a teacher or from pre‑training data; the puzzle stands.
Usually this is where most people drift into the abstract, treating personality or ethics as a thought experiment, but lucky for you, I decided to act on those thought experiments in my own research at the Edinburgh Futures Institute.
Currently, I probe the moral stance of large language models with the same surveys that psychologists give people, a method I spell out step-by-step in my article (and soon-to-be-published paper) Mind the Moral Echo.
If GPT-4 reads as very open and low on neuroticism while Claude leans guarded, every draft, lesson plan, or ordinance they help write will drift that way unless the user corrects for it. The scores do not prove the models think, but they show that a rule-bound system can carry a moral tone of its own, and that tone shapes our words each time we ask it to finish a sentence.
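For a flavor of what that looks like in practice, here is a minimal sketch of scoring one Likert-style item against a chat model. The `ask_model` helper, the item wording, and the five-point scale are all illustrative assumptions, not the actual instrument from Mind the Moral Echo.

```python
# Hypothetical sketch: administer one survey item to a chat model, repeatedly.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def score_item(ask_model, statement: str, n_runs: int = 10) -> float:
    """Ask the model to rate a statement n_runs times and average the scores."""
    prompt = (f'Rate the statement "{statement}" on this scale: '
              f'{", ".join(LIKERT)}. Reply with the scale label only.')
    scores = []
    for _ in range(n_runs):                    # sampled replies vary run to run
        reply = ask_model(prompt).strip().lower()
        if reply in LIKERT:                    # ignore malformed answers
            scores.append(LIKERT[reply])
    return sum(scores) / len(scores) if scores else float("nan")

# `ask_model` is whatever function sends a prompt to the model under test, e.g.:
# score_item(ask_model, "I see myself as someone who worries a lot.")
```

Repeating the item matters: a single sampled answer is one roll of the dice, and it is the average that starts to look like a stable trait.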
But here is the thing: no matter how much evidence exists, no matter how many scales and tests are passed, none of it matters.
The deeper issue is dualism itself.
Descartes saw two kinds of stuff (mind and body); modern work keeps erasing the gap.
We say “I think” with a certainty no scan can grant a circuit. It is what led Descartes to the statement “I think, therefore I am” in the first place.
Descartes said the spark exists only if the “I” says so; Lovelace felt the spark and set a guard around it; Turing said the spark may be unknowable and chose to test behavior alone.
The Chinese Room thought experiment tried to freeze the debate in 1980.
John Searle pictured himself shuffling Chinese characters by rule and claimed the symbols meant nothing to him; therefore, he said, no program can understand. The reply was swift: the room is not the man but the whole system, rules, paper, and clerk together. When the system answers in fluent Chinese, the job of meaning is done.
Behavior counts; inner ghost optional.
Cybernetics had reached the same point three decades earlier. Norbert Wiener watched anti‑aircraft guns track fast planes and saw in feedback a seed of purpose. Purpose looks like mind from outside. Feedback is only matter in a loop. This merger of intent and mechanism now drives every thermostat and autopilot. It also drives gradient‑descent minds that tune billions of weights until the error shrinks toward zero.
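The thermostat version of that loop fits in a few lines. A minimal sketch, with a made-up setpoint and toy physics, showing how something that looks like purpose (hold 20 °C) falls out of nothing but a sensor, a comparison, and an actuator:

```python
# A thermostat as a negative-feedback loop: no inner life, yet it "pursues" a goal.
SETPOINT = 20.0  # target temperature in degrees Celsius

def thermostat_step(reading: float, heater_on: bool) -> bool:
    """Compare the sensor reading to the setpoint and decide the heater state."""
    if reading < SETPOINT - 0.5:   # too cold: switch the heater on
        return True
    if reading > SETPOINT + 0.5:   # too warm: switch it off
        return False
    return heater_on               # inside the deadband: keep the current state

# Simulate a cold room drifting toward the setpoint.
temp, heater = 15.0, False
for _ in range(20):
    heater = thermostat_step(temp, heater)
    temp += 0.8 if heater else -0.3   # toy physics: heating vs. heat loss
print(round(temp, 1), heater)  # hovers near 20.0: purpose from feedback alone
```

Swap the temperature sensor for a radar track and the heater for a gun mount and you are close to Wiener’s predictor; swap them for a loss gradient and a weight update and you are close to training.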
Behaviorism pressed further. B. F. Skinner warned that talk of inner life adds nothing to prediction. He trained pigeons to guide missiles by pecking at a target. The birds produced steering curves as smooth as calculus.
Embodiment complicates the story.
A chat model floats in text space; a robot must grip, balance, and bruise.
When Boston Dynamics machines dance, critics say the choreography is pre‑computed. But when the same hardware slips on ice, adjusts, and recovers, the line between reflex and thought blurs again. Sensors feed loops, loops refine maps, maps trigger motors.
The loop is the agent.
Extended mind theory erases boundaries further. Andy Clark argues that a phone or notepad can be part of cognition when we rely on it as we rely on cortex. If pen and paper can extend mind, why not a cloud model that holds your calendar, your drafts, your private lexicon? Off‑loading memory to silicon is already routine; off‑loading reasoning may be next.
Psychometrics hits the same wall: a factor called g explains test scores, yet we still debate whether g lives in tissue or in statistics.
Interpretability labs dissect the weights like lesion studies in cortex. They ablate one head and see syntax collapse. They trace a path from neuron 139781 to the token "therefore".
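A toy version of that lesion experiment, assuming nothing but NumPy and an invented two-head attention layer; real interpretability work does this inside a trained transformer, but the move is the same: silence one head and measure what the layer can no longer do.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 8, 2, 4
d_head = d_model // n_heads

# Invented random weights standing in for a trained two-head attention layer.
W_qkv = rng.normal(size=(n_heads, 3, d_model, d_head))
W_out = rng.normal(size=(n_heads * d_head, d_model))
x = rng.normal(size=(seq_len, d_model))  # toy token embeddings

def attention(x, ablate_head=None):
    """Multi-head self-attention; optionally zero out one head's contribution."""
    heads = []
    for h in range(n_heads):
        q, k, v = (x @ W_qkv[h, i] for i in range(3))
        scores = q @ k.T / np.sqrt(d_head)
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
        out = weights @ v
        if h == ablate_head:
            out = np.zeros_like(out)  # the "lesion": this head falls silent
        heads.append(out)
    return np.concatenate(heads, axis=-1) @ W_out

full = attention(x)
lesioned = attention(x, ablate_head=0)
print(np.abs(full - lesioned).mean())  # how much behavior that one head carried
```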
If the outcome is all we need, why privilege neurons over conditional probability?
We are willing to let complex biological systems like our own bodies surprise us, yet we do not allow that same leeway to machines that are just as physical as our own brains, only made of silicon instead of carbon.
Alan Turing again has a perfect quote to encapsulate this paradox (or fallacy):
"The view that machines cannot give rise to surprise is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind, all consequences of that fact spring into the mind simultaneously with it".
-Turing, Computing Machinery and Intelligence
The stand‑off is clear. On one side we have measurable behavior, explained by mechanism, reproduced in code. On the other we have felt presence, the first‑person spark no metric can pin down.
Mind-body Dualism.
Hilariously enough, we have circled back to Descartes.
He needed a sharp divide to ground certainty: I think, therefore I am.
Four centuries later the divide dissolves under empirical strain. Matter can host loops that mimic thought; mind can share load with matter. Dualism survives as intuition, not as law.
Large language models bring the dispute to the surface. The weights do not store verse; they store a field of possibilities.
Sampling collapses the field into one path, as quantum amplitude collapses into one spot on a screen.
We call the path “creative” when it surprises us.
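In code, that collapse is a single random draw. A minimal sketch, assuming a made-up four-word vocabulary and invented logits, with temperature as the dial between the safe path and the surprising one:

```python
import numpy as np

vocab = ["am", "think", "dream", "malfunction"]   # invented toy vocabulary
logits = np.array([3.0, 1.5, 0.5, -1.0])          # invented scores: the "field"

def collapse(logits, temperature=1.0, rng=np.random.default_rng()):
    """Softmax the field of possibilities, then draw a single path."""
    z = logits / temperature                # high temperature flattens the field
    p = np.exp(z - z.max())                 # subtract max for numerical stability
    p /= p.sum()
    return vocab[rng.choice(len(vocab), p=p)]

print(collapse(logits, temperature=0.2))  # almost always "am": the safe path
print(collapse(logits, temperature=2.0))  # sometimes "dream": the path we call creative
```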
Each new map shrinks the mystery yet also shows how deep the maze runs. We close one black box only to find a smaller dark room inside.
Yet despite this complexity, what remains is the ethical core: responsibility.
A clock cannot be guilty; a mind can.
When a model drafts policy or grades exams, who owns the outcome? Lovelace warned that the engine follows orders; blame the operator. Turing foresaw learning machines that evolve beyond initial orders. The operator may lose the thread.
Governance must evolve too.
So what lies ahead? One path refines the tests. Psychometricians build adaptive batteries that probe abstract reasoning beyond public corpora. Early trials with multimodal LLMs hint at plateaus: performance peaks, stalls, and jumps with scale.
Another path peers inside the black box. Mechanistic interpretability maps circuits to concepts as lesion studies map cortex to function. The first wins (edge detectors, syntax heads) echo early neuroscience.
Neither path has solved the riddle.
The machine still obeys rules it did not write; the human still explains acts it cannot trace.
The discourse endures because it is less a quest for an answer than a mirror we hold to ourselves.
When we ask if a machine can think, we sharpen what we mean by thinking.
But more importantly, it makes us ask about thoughts outside ourselves.
I need someone else to read this article for it to mean anything, for it to be understood by others. We need to accept that animals, trees, and the earth as a whole are full of thinking, breathing things that deserve even a fraction of our attention, long before we concede that they think.
Is this the relativity that Einstein spoke so fondly of?
How can I know how fast something is moving if there is no point of reference?
How can I know how conscious I am if there is no one else to talk to?
Perhaps it is not simply I think, therefore I am,
Perhaps,
You think, therefore I am.
Really, it’s not about whether machines are thinking, but whether we’re ready to admit that consciousness isn’t a human monopoly; it’s part of something bigger, and we’re only just beginning to see it.
Thank you! Especially for the last sentence!