We’ve all been down that AI rabbit hole: impressive outputs, surprisingly thoughtful replies, code generated on demand, photorealistic images, near-perfect translations… When you see what AI is capable of, it sometimes feels alive. But is feeling alive the same as being alive?
Now let’s imagine a virus for a moment: perfectly developed genetic material, encapsulated in a protein shell, carrying instructions for replication. It is elegant. It’s efficient. Hard to resist. And yet, without a host cell, it does nothing. No metabolism, no motion, no reproduction. Just potential, waiting for an environment to make it look alive for a limited time.
Here is the ugly truth: Large Language Models are much the same!
They’re extraordinary feats of engineering: gigantic libraries of knowledge encoded in countless weights and high-dimensional embeddings, sleeping in datacenters the size of shopping malls, with energy bills to rival thriving cities, waiting to be triggered by a request so they can shine for a few seconds. When prompted, they can compose poetry, draft legal clauses, and generate algorithms, or they churn out horoscopes and make up historical facts. So let’s be honest: they are frozen information potentials, not living systems. Mimicry at best, and, like viruses, they need external scaffolding to function. They do not think. They do not desire. They do not reflect. They only react when triggered.
And yet, here we are, watching smart people bet entire careers and billion-dollar companies on the imminent emergence of "intelligence" through brute-force scaling.
Add more parameters, increase training data, feed more compute. Surely, intelligence will crawl out of the math.
But life — and by extension, intelligence — doesn’t emerge from scale. It emerges from structure!
Biological cells are messy factories of constant interaction: membranes that define boundaries, energy cycles, error correction, feedback loops, and most importantly, intrinsic goals. It gets even more complex when cells form organs like a brain and are forced to work together. They don’t wait to be asked. They respond permanently to tension: too little nutrient, too much waste, environmental shifts. They sense, decide, and act in ways that preserve themselves.
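To make that contrast concrete, here is a deliberately tiny homeostat sketch in Python. It is not a model of real biochemistry; the class, the variables, and the thresholds are all invented for illustration. The structural point is what matters: the loop runs continuously, senses internal imbalance, and acts without waiting for a request.

```python
# Toy homeostat: a loop that acts on internal tension, not on prompts.
# Every name and threshold here is invented for illustration, not biology.
import random

class Cell:
    def __init__(self):
        self.nutrient = 1.0   # internal energy store
        self.waste = 0.0      # accumulated by-products

    def sense_tension(self):
        """Return the internal imbalances that currently demand action."""
        tensions = []
        if self.nutrient < 0.4:
            tensions.append("low_nutrient")
        if self.waste > 0.6:
            tensions.append("high_waste")
        return tensions

    def act(self, tension):
        """Reduce the sensed imbalance."""
        if tension == "low_nutrient":
            self.nutrient += 0.5                      # "feed"
        elif tension == "high_waste":
            self.waste = max(0.0, self.waste - 0.5)   # "excrete"

    def live(self, steps=10):
        for step in range(steps):
            # The environment perturbs the cell whether or not anyone asks.
            self.nutrient -= random.uniform(0.1, 0.3)
            self.waste += random.uniform(0.1, 0.3)
            for tension in self.sense_tension():
                self.act(tension)                     # self-preserving, unprompted
            print(f"step {step}: nutrient={self.nutrient:.2f}, waste={self.waste:.2f}")

Cell().live()
```

Even this toy keeps running, and keeps "caring" about its internal state, between any interactions with the outside.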
Current AI systems do none of that. And dropping them into a robot chassis, coupling them with tool-use APIs, or bolting on retrieval-augmented generation (RAG) doesn’t change that fundamental fact.
Let’s be crystal clear: this isn’t a pessimistic take. It’s the kind of clarity that helps us navigate the hype and distinguish corporate mysticism from fundamental scientific progress.
Real intelligence, artificial or biological, requires more than the ability to answer questions by recombining once-digested training data.
It needs:
A persistent self-model (Who am I, and what state am I in?)
Intrinsic motivation (Why do I care? What tension am I reducing?)
Goal generation (What should I pursue, independent of external requests?)
Recursive self-monitoring (Am I aligned with my goals, or should I adjust?)
Semantic grounding (Am I describing the world or interacting with it?)
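To show that this checklist is a structural claim rather than mysticism, here is a minimal sketch of how the five requirements could map onto an agent loop. Every class and method name is hypothetical; nothing below refers to an existing framework or model.

```python
# Hypothetical skeleton mapping the five requirements onto an agent loop.
# None of these classes exist in any real framework; the names are placeholders.
from dataclasses import dataclass, field

@dataclass
class SelfModel:                        # 1. persistent self-model
    identity: str = "agent-0"
    state: dict = field(default_factory=dict)

@dataclass
class Drive:                            # 2. intrinsic motivation
    name: str
    tension: float                      # how far the drive is from being satisfied

class Agent:
    def __init__(self):
        self.self_model = SelfModel()
        self.drives = [Drive("curiosity", tension=1.0)]

    def generate_goal(self):            # 3. goal generation: no external request needed
        most_urgent = max(self.drives, key=lambda d: d.tension)
        return f"reduce {most_urgent.name} tension"

    def act_in_world(self, goal):       # 5. semantic grounding: interact, don't just describe
        # A grounded system would change its environment and sense the result back.
        return {"tension_delta": -0.1}

    def monitor(self, goal, outcome):   # 4. recursive self-monitoring
        aligned = outcome.get("tension_delta", 0.0) < 0.0
        if not aligned:
            self.self_model.state["needs_adjustment"] = True
        return aligned

    def step(self):
        goal = self.generate_goal()
        outcome = self.act_in_world(goal)
        return self.monitor(goal, outcome)

agent = Agent()
print(agent.step())   # True: the loop ran without anyone asking a question
```

The sketch is trivially easy to write and enormously hard to make real; the open problem is filling these placeholders with mechanisms that actually ground the self-model, the drives, and the actions in a world.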
Until systems have these structures, scaling will only make them better performers of predictive hallucination, not thinkers and creators. They will simulate intelligence in increasingly sophisticated ways, but they will remain, like viruses, dependent on external scaffolding to do anything at all.
Why does this matter?
Because mistaking viral templates for life leads to poor decisions. It leads to investment bubbles, misplaced fears of imminent runaway AGI, and public narratives that frame “world knowledge databases with a natural language interface” as sentient beings. It lets smart people fall prey to esoteric theories like the “computational theory of mind”. It lets people confuse the simulation of intelligence with the application of independent thinking. Worse, it distracts from the real work of building artificial entities that could one day qualify as intelligent systems.
That real work isn’t just bigger models or more data. It’s architectures that support tension, self-regulation, and curiosity. It’s engineering a digital "metabolism": systems that might feel incomplete unless they pursue their goals.
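One way the research literature has formalized "curiosity" is as an intrinsic reward proportional to the error of the system’s own predictions: the agent is pulled toward whatever it cannot yet model. The sketch below compresses that idea to a toy; the numbers and names are illustrative only.

```python
# Toy curiosity signal: intrinsic reward proportional to the agent's own prediction error.
# A drastically reduced sketch of an idea from the reinforcement-learning literature;
# all numbers and names are illustrative.

def intrinsic_reward(predicted: float, observed: float) -> float:
    """The more the world surprises the agent, the stronger its drive to explore."""
    return abs(predicted - observed)

belief = {"door_opens_if_pushed": 0.2}   # the agent's current world model
observation = 1.0                        # the door did open

reward = intrinsic_reward(belief["door_opens_if_pushed"], observation)
print(reward)   # 0.8: high surprise, high internal pressure to investigate further
```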
We’re not there yet. And we won’t get there by treating LLMs as living creatures or by anthropomorphizing GPT’s outputs as “thought”.
But here's the exciting part: acknowledging this doesn’t diminish what we’ve built.
It frames the challenge ahead. It clarifies the difference between the show and the substance. And it invites us to ask better questions:
How do we design intrinsic motivation for artificial systems?
How do we architect continuous self-models that can evolve independently, without permanent external feedback, and still create novel output?
How do we build frameworks that distinguish between describing an action and initiating one?
We don't get life from scaling viral templates. We get life from architecture. And the next breakthroughs won’t be measured in FLOPs. They’ll be measured in tension, feedback, and purpose. Because intelligence isn’t a party trick. It’s a structure that sustains itself when nobody’s watching.