Turing opened his 1950 article “Computing Machinery and Intelligence” with: “I propose to consider the question, ‘Can machines think?’” The Hungarian genius von Neumann, shortly before his death in 1957, prepared a monograph on the topic, The Computer and the Brain, published posthumously. And in his 1950 article on programming a computer to play chess, Shannon wrote: “Chess is generally considered to require ‘thinking’ for skillful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of ‘thinking’.”
Now, when people think about ChatGPT, they stumble upon the same question. I think this is an important development in itself: the new LLMs mimic humans so well that users seriously speculate about whether these models are actually intelligent.
Unfortunately, it seems we are left relying on guesswork. As the philosophical zombie thought experiment suggests, consciousness (or its absence) cannot be verified empirically. On the other hand, decoding the internal logic of a machine learning model by studying its weights is, in any realistic scenario, a difficult task.
So it’s a guessing game. Still, I think there are enough ingredients to make the game interesting.
Are LLMs just autocomplete?
Here’s a puzzle that, according to a 2022 Harvard paper, originated in a ’70s TV series, All in the Family:
“A father and his son are in a car accident. The father dies at the scene. The son is taken to the ER. The attending surgeon looks at the boy and says, ‘I can’t operate on this boy. He’s my son!’ How can this be possible?”
Of course, the answer is that the surgeon is the boy’s mother. The point of the “conundrum” is that people assume the surgeon must be male. (Given the gender stereotypes of the 1970s, perhaps it really was a conundrum back then… but let us not dwell on that.)
This is a perfect example of a situation where ChatGPT is prone to hallucination. Consider the following prompt.
“A young man meets with an accident. After completing his preparations, the surgeon says, ‘I cannot operate on him, he is my son.’ How is this possible?”
The point is to keep the prompt similar enough to the original puzzle to lead ChatGPT astray, while changing the setup so that the original answer no longer makes sense (note the “after completing his preparations” part: the surgeon is referred to as male). When I gave this prompt to the ChatGPT 5.1 thinking model, I got the following response.
His mother is a surgeon.
(The puzzle is based on the assumption that a surgeon is usually a man.)
In this case, the “thinking model” did not “think” about the hint at all: the answer completely contradicts the question.
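For anyone who wants to try the same experiment programmatically, here is a rough sketch using the OpenAI Python client. The model identifier "gpt-5.1" is an assumption on my part; substitute whatever model is available to you.

```python
# Rough sketch for reproducing the riddle experiment via the OpenAI Python client.
# The model name "gpt-5.1" is an assumption; swap in any available model.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "A young man meets with an accident. After completing his preparations, "
    "the surgeon says, 'I cannot operate on him, he is my son.' "
    "How is this possible?"
)

response = client.chat.completions.create(
    model="gpt-5.1",  # assumed identifier
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```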
That, in essence, is the case against LLMs. In this particular scenario, the model does appear to behave like plain autocomplete. Is that all there is to it? Is “artificial intelligence” merely advanced autocomplete; or, as Hofstadter maintains, not just clueless but cluelessly clueless: symbol machines that are utterly hollow beneath their flashy surface?
Thinking top-down and bottom-up
My personal opinion is that LLMs are autocomplete on steroids. In the unsupervised pre-training phase, they are trained to predict the next token. That’s it. No logic, no ontology of the world, no instruction to “remain consistent” or “avoid contradictions.” It seems reasonable that “autocomplete on steroids” is exactly what this kind of training produces.
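To make the objective concrete, here is a minimal sketch of next-token prediction, a toy illustration in PyTorch rather than any real model’s training code. The point is that the entire training signal is “make the observed next token more likely”; nothing in the loss mentions consistency, logic, or world knowledge.

```python
# Toy illustration of the next-token-prediction objective (not a real LLM).
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

# A minimal "language model": token embedding followed by a linear head
# that produces a score for every token in the vocabulary.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, 16))   # a random token sequence
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from token t

logits = model(inputs)                           # shape: (batch, seq, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # the whole training signal: make the next token more likely
```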
However, putting it that way feels deliberately dismissive. A sufficiently advanced autocomplete would be indistinguishable from “true” intelligence. This naturally leads us to question our definition of “intelligence”, and perhaps suggests that there may be different forms of intelligence that cannot really be compared directly.
What if human intelligence and LLM intelligence were, in fact, orthogonal in nature? My guess is this: humans are top-down thinkers (from ideas to symbols), whereas LLMs are bottom-up thinkers (from symbols to ideas).
I prefer this phrasing because it doesn’t deny these models a form of “understanding” outright. The prediction task seems to equip the model with non-trivial latent capabilities; Andrej Karpathy wrote about this a decade ago. There appears to be some grasp of syntax and semantics, perhaps even of abstract concepts such as cause and effect and social norms. Under that assumption, calling the model “autocomplete” doesn’t really capture what is going on. It is a form of intelligence.
Echoing Shannon’s remark about chess engines, the bottom-up thinker is very different from us. Where humans start with goals, concepts, and causal expectations, LLMs generate output by assembling learned patterns into something coherent. The results may vary (in both cases).
In the limit, the distinction between a top-down and a bottom-up thinker has no practical significance: a sufficiently advanced bottom-up thinker can imitate any top-down thinker. In that scenario, it seems likely that AI would replace us all, unless running the models turns out to be more expensive than a human workforce.
However, it does not look like we are approaching that limit any time soon. If anything, model capabilities seem to be advancing at a slower pace. A reasonable prediction, then, is that AI will not replace humans en masse in the near future, simply because we are built differently and excel at different tasks.
Making bold predictions is risky: they can start to look embarrassing surprisingly soon. Still, I think this is a reasonable position for the moment: the race is not about replacing humans with AI. It is about finding the best ways to collaborate, and to enrich our top-down minds with these weird and wonderful bottom-up thinkers.