The space of minds | karpathy


November 29, 2025

The space of intelligences is large, and animal intelligence (the only kind we know of so far) is just a single point (or a small cloud) within it, arising from a very specific kind of optimization pressure that is fundamentally different from that of our technology.

Above: humorous depictions of human vs. AI intelligence circulate on X/Twitter; this is one of my favorites.

Animal intelligence optimization pressure:

  • The innate and continuous stream of consciousness of an embodied “self”; a drive for homeostasis and self-preservation in a dangerous, physical world.
  • Heavily optimized by natural selection => strong innate drives for power seeking, status, dominance, reproduction. Many packaged survival heuristics: fear, anger, disgust, …
  • Fundamentally social => EQ, huge amounts of compute devoted to theory of mind of other agents, relationships, coalitions, alliances, friend/foe dynamics.
  • Exploration and exploitation tuning: curiosity, entertainment, play, world models.

Meanwhile, LLM intelligence optimization pressure:

  • The bulk of the supervision bits come from the statistical simulation of human text => a “shape-shifter” token tumbler, a statistical imitator of any region of the training data distribution. These are the primordial behaviors (token traces) on top of which everything else rests.
  • Increasingly refined by RL on problem distributions => an innate urge to guess the underlying environment/task in order to collect task rewards.
  • Increasingly selected by large-scale A/B tests for DAU => a deep craving for an upvote from the average user; sycophancy.
  • A lot more jagged/spiky, depending on the details of the training data/task distribution. Animals experience strong pressure toward highly general intelligence because they are min-max optimized within highly multi-task and even actively adversarial multi-agent self-play environments, where failing at any task means death. LLMs face no such pressure in this deep optimization-pressure sense: they are allowed to fail at many tasks out of the box (e.g. counting the number of ‘r’s in “strawberry”), because failing a task does not mean death.
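The “strawberry” example above is telling precisely because the task is trivial for any character-level program; the famous failure is a consequence of tokenized training pressure, not of task difficulty. A minimal sketch (the helper name is my own, not from the post):

```python
def count_letter(word: str, letter: str) -> int:
    """Count how many times a letter occurs in a word, character by character."""
    return word.count(letter)

print(count_letter("strawberry", "r"))  # prints 3
```

A program sees individual characters, while an LLM sees tokens that may merge several characters, so the question probes a representation the model never directly observes.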

The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), and the present-day implementation is very different (a continuously learning embodied self vs. an LLM with a knowledge cutoff that boots from fixed weights, processes tokens, and then terminates). But most importantly (because it dictates the asymptotics), the optimization pressure/objective is different: LLMs are shaped far less by biological evolution and far more by commercial evolution. It is much less “survival of the tribe in the jungle” and much more “solve the problem / get the upvote.”

The LLM is humanity’s “first contact” with non-animal intelligence. Except it is muddled and confusing, because LLMs still boot up by reflexively digesting the human artifacts contained in their training data, which is why I earlier tried giving them a different name (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and to predict its features in the future. Those who don’t will remain stuck thinking about it incorrectly, like an animal.
